Just imagine that I fed these notes to ChatGPT and had it edit the bullet points into something readable.
(source)
AI today is just pattern-matching.
Stable Diffusion and other "generative art" models are trying to reconstruct a probability distribution over images, conditioned on an input prompt. Style transfer is possible, but it requires being fed training examples in that style to begin with.
Large language models predict the next token in a sequence based on the tokens that came before it, with probabilities learned from their training data.
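As a concrete (if toy) illustration of that claim, here's a minimal sketch of next-token prediction using an off-the-shelf GPT-2 via Hugging Face transformers - the model choice and prompt are just examples of mine, not anything from the original notes:

```python
# Minimal sketch: next-token prediction with an off-the-shelf GPT-2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits       # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]         # distribution over the *next* token
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```

Everything the model "says" is a sample from this learned distribution - which is the sense in which it's pattern-matching.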
Notably, models perform worse when combining very disparate ideas - both for style + subject pairs ("boxing kangaroo" is not at all in the training set, but is a concept humans can easily imagine) and for things outside social norms (generated pictures of women tend to, shall we say, fit the male gaze).
Generalizing outside of that distribution requires "creativity" - inventing new concepts or at least pursuing uncommon ideas.
Then again, a lot of what humans do is just pattern-matching.
Many jobs have little flexibility / human judgment applied at the line level (and the very notion of a "line" is that each station has a very specific role to play). When a situation requires judgment, it also necessitates an "I'll need to speak to my supervisor."
Our brains perceive the world, and also make a prediction about what's going to happen next. When you try to catch a ball, you watch where the ball comes from as well as its angle and velocity to predict where it will land - just observing where the ball currently is would not be enough to know where you need to move to.
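To make the ball-catching example concrete, here's a toy sketch (my own simplification, ignoring air resistance) of the prediction your brain is doing - the landing point falls straight out of angle and speed, while the current position alone tells you nothing:

```python
# Toy illustration: predicting where a ball lands from its launch angle and
# speed, ignoring air resistance (not from the original notes).
import math

def landing_x(x0: float, speed: float, angle_deg: float, g: float = 9.81) -> float:
    """Horizontal landing point of a ball launched from catch height."""
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)          # horizontal velocity
    vy = speed * math.sin(angle)          # vertical velocity
    t_flight = 2 * vy / g                 # time until it returns to launch height
    return x0 + vx * t_flight

# Knowing only where the ball currently is isn't enough; the prediction
# needs its angle and velocity.
print(f"Move to x = {landing_x(x0=0.0, speed=12.0, angle_deg=40):.1f} m")
```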
Why do programmers build frameworks and design libraries? Because most of the process involves repetitive work and only slight variations on a theme.
I think we're at the stage of "automating lots of low-agency people" rather than "automating high-agency people" -- this is a fairly consensus view by now IMO, but I'm less aggressive on timelines than the Berkeley AGI folks, and I think there need to be at least a few more step-function paradigm shifts before we get to that point.
So the conditions necessary for "creativity" are interesting to me - I think most artists require a certain level of mental illness to be successful: not just the narcissism to believe that the world needs to hear/see/read your contributions, but also a certain level of derangement to go outside of social norms.
An AI trained with data up until 1917 would never put a urinal on a stand and call it art. What is the AI equivalent of cutting off your ear? (probably something like gradient starvation)
"Hacks" just copy what's popular now and make a fascimile of it. That's kind of where AI is now, to me. It's kinda like Netflix's series development process, start with a list of things consumers like and backsolve for everything else. Hence how you get a bunch of series that sound fine on paper but have no real soul.
Most of the gains from AI companies now will accrue to big tech.
AI startups doing first-order ideas will fail to win the distribution and UX battle versus incumbents. Microsoft is bundling AI into Teams and Bing - and while it's still possible for somebody to build a better product than these, distribution is always king.
Bing in particular is interesting: it has claimed for a while that its algorithms are better than Google's (per an old study), but even if that were true, Google is still the default on the main search surfaces (Chrome and the iPhone) - a default Google pays dearly for.
It's not enough to be 2x as good - you need to be 10x as good against an incumbent with a distribution advantage.
Microsoft's pricing for Copilot and Teams is instructive - high enough to discourage free customers from burning GPU cycles, but low enough to discourage startups from competing head-on.
No consumer cares about your app's [AI, crypto, ...] - they care about how it feels
I don't think any consumer apps should/will brand themselves as the AI version of something. This relates to the previous point: a new AI-powered Zoom startup is dead on arrival. Zoom will just build AI in, and then where are you?
One obvious defense against big tech is aggregation: instead of building AI for Slack, build AI for all your documents (Slack, Google Docs, Notion, ...) in one place. This is a pretty obvious idea though, and the challenge isn't really using AI so much as getting everything into that one place in the first place! But hey, if YC is willing to fund 6 clones of Glean, what do I know.
The playbook for competing against big tech is the same as it always was: look where they're not looking
Why could Zoom successfully compete against Cisco WebEx? Because Cisco was not investing in forward development of the product and was instead trying to improve margins on sticky recurring cash flow, so Zoom could create a 10x better product.
Problem is, big tech is looking at integrating LLMs. The news has made a bunch of hay out of Google pulling a "code red" - which is probably mostly theater, but still, it seems unlikely that a startup's AI-powered word editor is going to beat the inevitable AI integrations into Office and Google Docs.
In fact, the correct play is probably to wait for those features to show up (and prove impossible to toggle off), then create a dumbed-down, privacy-focused version without AI.
There will always be a market for things "made by humans"
See also: Post-hyperscale
AI can drive the marginal cost of lots of things to zero, but people enjoy things for plenty of reasons beyond their mere existence (or even their efficiency).
Example: you could pay ~$0 for an AI-generated portrait of you, but there will still be a market for a human-drawn portrait. If everyone has an AI-generated portrait, then it's a status symbol to have the more inefficient and expensive hand-drawn portrait, even if the quality is 'worse'.
To err is human; I think we will see a movement against perfection in machine-generated designs and work. The current RLHF paradigm pushes ChatGPT and similar models towards 'broadly acceptable' answers, taking away the unpredictability of the models (and of human nature).
Without linking, for obvious reasons, suffice it to say there are models which can generate mostly realistic-looking pictures of conventionally attractive women in various states of undress. I'm not sure who the target market is - the whole point of conventional beauty norms is that many, many such photos already exist! There is no feeling to these pictures, or, put a different way, "everyone is beautiful and no one is horny".
These models are, like the chatbots, designed to appeal to everyone and consequently to nobody. But the superpower of the internet is that infinite scale lets niche communities find each other, so successful usage of these models will be tuned to more niche tastes. I can imagine niche communities fine-tuning models to fit their desired aesthetics or writing style; the low cost of fine-tuning relative to training a foundation model makes this quite reasonable.
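For what that might look like in practice, here's a rough sketch of low-cost fine-tuning with LoRA adapters via Hugging Face's peft library; the base model, hyperparameters, and "community_texts.jsonl" corpus are all hypothetical placeholders of mine, not anything from the notes:

```python
# Rough sketch: cheap style fine-tuning with LoRA adapters via `peft`.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "gpt2"                             # stand-in for a larger foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Only the small adapter matrices get trained, which is what keeps costs low.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                                         task_type="CAUSAL_LM"))

# "community_texts.jsonl" is a hypothetical corpus of the niche style to imitate.
data = load_dataset("json", data_files="community_texts.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="niche-style-lora", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

A run like this is within reach of a hobbyist GPU budget, which is exactly why I'd expect niche tastes, not broad acceptability, to drive the interesting uses.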