February 24, 2021
I've gotten a lot out of doing this experiment over the past week. To reiterate: I set up a Slack workspace and over 7 days, told a few creators what to do each day to encourage them to keep up their Twitter engagement. I recommended accounts to follow and conversations to respond to.
I'm not a Twitter expert by any stretch of the imagination; I just looked at their profiles individually, got a quick sense of what they were about, and gave them daily targets based on what they were already doing or what they were obviously not doing.
Anyway, doing this experiment as a way to test my product idea on a very small scale allowed me to confirm a few of my assumptions early on:
To the third point above, the results speak for themselves. In just seven days, one of my "users," an app developer, reported significant improvement in tweet impressions, profile visits, mentions, and follows. I was happy to know as well that by the final challenge, he had built enough momentum to write a post on Reddit about the app he's building.
I noticed similar improvements in two other users, though only as much as one can expect from such a short timeframe. I plan to continue following their accounts over the next several days to see how they're doing. A fourth user dropped out of the experiment after the third day, as he decided he wanted to take a more organic approach to Twitter—fair enough, and all part of the process.
Still, to me there's enough of a green light to try the experiment again with a few tweaks and specific areas that I might want to play around with. In no particular order:
1. Recommending accounts to follow
I learned early on in the experiment that recommending accounts to follow didn't make much of an impact, as my users would mostly just follow whoever they wanted anyway. One of my users was especially deliberate about curating his timeline (an approach I sympathize with) and was very selective about his follows. Still, it might be useful to suggest accounts occasionally when a user doesn't seem to be growing their network, or isn't getting much engagement from the usual options.
2. Recommending conversations
As a corollary to the above, there seems to be a lot more value in curated lists of conversations that a particular user might find easy to respond to, based on their interests, recent activity, and so on. Usually these conversations have some kind of prompt or invitation to respond, and are often started by larger accounts. I've found this to be an effective way to get exposure, especially with consistency, as lots of other people looking to make connections hang around these big accounts.
Similarly, I've also found it easy to get people to just respond to thoughtful questions in general, even from smaller accounts. This doesn't result in a lot of exposure, but does increase the likelihood of a connection that can become more meaningful over time.
Naturally, I have a long way to go in streamlining this whole process so that the recommendations are more likely to be good fits than not; one can only do so much manually!
3. Progressive difficulty and retaining users
A sharp, linear progression makes sense over a short period like 7 days, but what if we were to extend this over months, or a whole year? It would probably make more sense for the difficulty to have ebbs and flows. In the future I'm thinking of giving users the option to opt out on certain days.
4. What if a user misses a target?
Initially, when this happened, I just restarted with the previous day's challenge, with a new set of recommendations. Again, this makes sense over a 7-day timeline, but if this were built out and extended, there would have to be a much greater incentive for a user not to miss any daily target.
5. Recommendations or strict orders?
I'm continuing to think of ways to make this experiment more challenging and engaging. One possibility might be to have users, on all or some days, follow my directions exactly in order to meet the day's goals: follow these specific accounts, respond to this specific conversation. But I don't believe this would be necessary if the recommendations were consistently good fits.
6. Finding the right users
I like to think of this whole thing as an audition to find the right users who will benefit the most from what I'm offering, rather than a race to make everybody happy. I noticed the most positive results from my users the more open they were to following my recommendations, and of course, the more they had to offer in the first place—for example, having one main, visible project that they're working on or trying to promote. As I work with more people I hope to get a much more refined sense of our ideal customers.
7. Will anybody pay for this?
Naturally, all of this is just the tip of the iceberg, and I may have many more thoughts in the coming days. We have a very long way to go from here to actually building anything. But my hope is that by sustaining this experiment and allowing it to evolve, we create a feedback loop that will serve us in the long run and lead us toward a product that will actually achieve our stated goals.
Comments are not enabled (yet?). Please email me if you see anything that interests you.