💬 Issue #14: Depression
Neither robot nor researcher is immune to the blues - so let's fix it.
Friday achieved. Let’s go.
Last week we covered how AI researchers are feeling massive pressure in the wake of OpenAI's rise. This isn't a new phenomenon for careers in any hot field, but right now the stress laser is zeroed in on AI academics, as many of them feel they must urgently choose between science and profit.
In a recently published academic paper titled “Choose Your Weapon: Survival Strategies for Depressed AI Academics” I learned that it is actually possible to make even a university research paper both 1. acutely useful and 2. hilarious, so I summed up all ~6,000 words for you here:
(And no, I didn’t sum this up with ChatGPT. I tried but it didn’t like the input and I run away from hard things.)
The problem, as stated: You used to be able to do AI research on a couple of GPUs in a lab, but massive computing power, massive datasets, and massive megabucks objectively make for better results. So how do we compete on a college budget when OpenAI has hundreds of millions of dollars?
Some solutions: (Now, pay attention, this isn’t just AI talk. This is “how to survive in any super competitive industry” stuff.)
Give up! – Ha. True to the title re: depression. You can always throw in the towel and “give up on doing things that are really impactful.” But, psych, this one is actually a challenge because you’re already in the game, so you might as well keep going. You signed up to do hard things and you’re more than capable.
Try scaling anyway - AKA “Let’s go tilting at windmills!” The paper pegs the typical budget available to a university researcher at around $50k, which is now more efficiently spent on cloud computing than on, say, bolting together a bunch of gaming PCs to run your models. Can you build a model as large as ChatGPT? Hell naw. Can you do something? Absolutely.
Scale down - Focus on “toy” problems, something simple yet representative. Take your big meaty huge project and find the smallest important thing you can pull out and test individually. The media loves huge projects, but we’re in the progress business, aren’t we?
Reuse and remaster - Or, as one plucky newsletter author would say, steal like an artist. There are tons of open models out there just for you! Adopt large parts of what works and focus on your thing. Don’t fall prey to “Not Invented Here” syndrome.
Analysis instead of synthesis - Take an existing model (or whatever the large work unit in your field is) and analyze it vs. trying to build a whole new one on your own. We have lots of new models that produce lots of incredible outputs, but we don’t really know how they work. It’s a black box. So be a pal and figure it out so the rest of us know.
“RL! No Data!” - Translation: Large Language Models (LLMs) need massive quantities of data and massive truckloads of money to train, whereas Reinforcement Learning (RL) needs only the latter. (Nothing is free, little one.) Not needing massive data is a wonderful thing if you’re working to make some progress.
“Small Models! No Compute!” - “Think of the smallest possible models that are capable of solving a problem or completing a task.” This is a great takeaway for any work ever. You probably have a vision in your head for this big awesome thing that’s going to snap your industry over your knee because it’s just so cool and huge, but lots of things still succeed on a much smaller scale. Things like “Edge AI” work on live data in the moment, so no giant dataset is required. Don’t write this one off just because it’s nerdy AI stuff – you probably have ways to simplify your own thing too.
Work on specialized application areas or domains - aka niche down and win. Find something too small for mega-industry to care about.
Solve problems few care about (for now!) - Pull your head up and look around. What fields or sub-fields aren’t sexy yet but have potential in your mind? What do normal non-researcher people care about in the real world? That’s where the next horizon lies.
Try things that shouldn’t work - Big company must do thing that always work. So try thing that probably definitely not work and maybe win by surprise.
Do things that have bad optics - Basically, the bigger a company is, the more it cares about how something looks to the media and general public. So just limit your constraints to “the law and your own personality” and do some wild stuff. The field is far, far more open for you to create than it is for a stuffy corporation obsessed with PR.
Start it up; spin it out! - Classic tech academia-to-industry path: crack the nut and spin it out of your lab. Commercializing your research gets you out into the field and possibly towards the resources you need to do that Big Thing you feel you must do. Will this screw with your research career? Probably. But if you’ve got the mettle, this is a major option.
Collaborate or jump ship! - Partner with a big university or organization to make your dreams happen. Free lunch is a powerful drug and mid-level executives are not immune. Charm and reason your way into those compute cycles.
How can large players in the industry help? - See above.
How can universities help? - See above again. Lunch. Charm.
To summarize this whole paper, it’s basically this: Relax a little. Keep it simple. Keep it moving. You’re in a wonderfully fruitful position if you understand anything about AI right now. The resources are out there in many forms; you may just need to ask nicely.
But really you should just go read the whole paper here, it’s great: https://arxiv.org/abs/2304.06035
Good luck out there, AI researchers (and the rest of us mere mortals).
ON THE INTERNETS
TWEET OF THE WEEK
Benadryl be like, you got allergies? No prob, here’s a coma.
— Missy Baker (@TheMissyBaker)
Apr 4, 2023
See ya next week
— 💬 The EiT Crew at Status Hero
WHAT'D YOU THINK OF THIS WEEK'S ISSUE?
Let it rip, we'd love to hear from you. Click one: