My smartest friend once told me his secret trick to acing all his tests: he listened to movie soundtracks while studying. He reasoned that since these soundtracks were designed to steer your attention, they could help you focus. I decided to try out his strategy, and he recommended the Man of Steel OST by Hans Zimmer. It didn’t work. In fact, it made things measurably worse. My reading speed dropped by at least 10x because I kept losing my train of thought. I played other soundtracks, including Tron: Legacy and Interstellar, and while amazing, they presented the same issue. My headphones were emitting calm and alluring instrumentation one moment, then suddenly blaring loud and jarring percussion the next. This made it nearly impossible to think. It’s a totally unhinged way to study, but it worked for him.
Music as a tool for focusing makes me view it as a safe alternative to something like Adderall. The latter easily crosses your blood-brain barrier and has many side effects and contraindications. Music, by contrast, is chill. While music is obviously the more popular study tool, Adderall is only a stone’s throw away on any given college campus, especially before exams. The shared outcome of focus between music and Adderall sort of opens up the question of finding safer alternatives in other contexts.
ALT Ozempic (semaglutide)
The breathless demand for a magical fat loss pill has skyrocketed Ozempic’s sales. Millions of prescriptions are being filled each month across the United States and Canada. This is happening in spite of its numerous side effects and terrible user experience: your dosage is gradually titrated upward over a period of weeks, and a monthly supply can cost hundreds of dollars.
Music is to Adderall as X is to Ozempic. Let’s solve for X.
What about topical numbing cream? If you apply some on your gums and cheeks, you won’t be able to feel your mouth, thereby negating the mouth pleasure of eating, and that ipso facto reduces your appetite. It’s readily available in local drugstores and super cheap. If that’s not safe enough, you could also gargle Listerine or chew minty gum on an hourly basis to suppress even the most voracious of appetites.
(Not medical advice.)
ALT Heresy
It’s tempting to blurt out edgy or heretical things because it makes people think you’re a cool genius. Peter Thiel single-handedly pushed the idea of contrarianism so hard that it became wildly popular, which is a super hard thing to accomplish because it’s inherently contradictory. What actually happened is that people found a shortcut for rigorous thought processes: they just “[attach] a minus sign to what everybody else thinks”. Tweeting something against the grain can be fun, but it’s super dangerous. You could get cancelled or, worse, be called stupid.
You’re better off sharing the SHA256 hash of your heresy and only revealing it if you turn out to be right. This lets you have your cake and eat it too. For example:
echo -n "I think scooped bagels are terrific" | shasum -a 256
# 14ee0b756d1d371101fd1a40ddd72fb3c4ddf8435210dd3179adda736fbe6aab
You would share the hash on Twitter, and then sometime later, if scooped bagels ever become popular, you can reveal the original sentence and revel in your genius.
ALT AGI Doomerism
It seems that every now and again, something strange comes along and spooks humanity. From the Cuban Missile Crisis to Y2K to COVID, we’ve endured many such events, some of which posed a real threat while others were merely questions of uncertainty. I think AGI will eventually belong on this list, but I don’t think we need to be as alarmed as some of the cognoscenti are urging us to be (i.e. the AI doomers). I have two litmus tests I use to gauge my level of AGI fear:
The Grass Blade Test. We haven’t replicated natural grass. A tiny blade of grass, as simple as it looks, conducts complex chemical reactions, and is a much easier target to replicate than consciousness. Until we are smart enough to advance materials science to the point of replicating grass, worrying about AGI feels premature. H/t Thomas Edison.
The Misnomer Test. AI was co-opted by brittle neural nets, so “real” AI had to be rebranded into AGI. It’s the same way VR was rebranded into “The Metaverse” and blockchain into “Web3”. Rebrands like these are often tells about the state of technological advancement in a field. If X is a world-changing moonshot that’s yet to be achieved, misnomering X and calling your product X-Gadget will help you sell a lot of gadgets. But it also forces people to start distinguishing between the X in your X-Gadget and true X. AGI will predictably be misnomered over and over until we arrive at a distinction like “artificial intelligence beyond any doubt”, or AIBAD.
ALT Prompting
ChatGPT is incredible, but prompting isn’t easy and the results are often dubious. It’s like using Google Translate to assign a complex task to a foreigner who’s dropping acid. I write and rewrite verbose instructions until I get the result I want. But for most people, carefully crafting a prompt is a strange premise. For the last 15 years, digital design has been undergirded by a principle of minimal user effort: recommendation algorithms, form autocompletion, and process autonomy enable users to accomplish more with less. Instead of combing through categories of hundreds of movies, Netflix simply surfaces movies you might enjoy. Instead of typing out your billing information when making a purchase, your browser just prefills it. Instead of logging in and publishing a post, you can automate a posting schedule. Prompting is sort of the antithesis of this “shortcut” culture. I think ChatGPT is most popular among certain people like programmers, copywriters, students and others who have to write thoroughly anyway (or enjoy doing so). But for the rest, prompting remains a laborious task, and potentially unsafe due to hallucinations. Instead of prompting, users should only have to select a highly specific tool and then enter some basic input (e.g. a tool for translating code only requires you to paste your code and choose the output language).
ALT War
Instead of waging war, what if two countries could safely resolve disputes by having their best players compete in a game refereed by a neutral third party? Maybe it could involve the blockchain somehow, and then we could call this new system of bilateral agreements The Satoshi Accords to make it sound more legitimate.
Let’s imagine Canada gets into a skirmish with the US over the fact that American companies are drawing away too many Canadian engineers. Diplomats from both countries would then elect the neutral third party—let’s say Latvia—and mutually decide on a game—let’s say ice hockey. Both countries would send their best ice hockey players to Latvia to compete against each other. If Canada wins, the offending American companies would be ordered to pay a retroactive tariff for each engineer who was brain-drained from the country. If either country reneges, Latvia gets half their GDP.
ALT Passwords
The bad UX of passwords causes lots of pain. There are a number of anti-password solutions, but the simplest of all is using a hash. Come up with a memorable seed phrase and never share it with anybody. For example, you might choose “frodo luke gandalf yoda”. Then to derive your Twitter password, just compute the hash:
echo -n "frodo luke gandalf yoda twitter" | shasum -a 256
# the hex digest is your password for Twitter
As always, there are some critical drawbacks to be mindful of:
Some sites enforce dumb password rules, such as character count limits and character requirements. This means you can’t use this technique everywhere.
Your seed phrase effectively becomes a single point of failure. It’s important to make it super hard to guess, but easy to remember. Ideally it should be a deep dark secret so you naturally won’t talk about it.
It’s worth exploring alternatives.