A dream's early end
A reaction to Sal Khan's TED talk, released two days ago. I watched it two hours before I scheduled this post; this was written in the interim.
Sal Khan has achieved a dream I have had for almost half my life.
In 2012, a YouTuber named CGPGrey released a video called "Digital Aristotle: Thoughts on the Future of Education". I don't remember if I saw it then, or a year or two later. I was in middle school and had just gotten full and free access to the internet: I watched the video, loved the idea, dreamed of how much more I could learn with a digital tutor, and moved on. I returned to the video a few times as I went through high school. I was older, and had strong opinions on the failures of math and computer science education. I started to learn the very, very basics of what made modern "artificial intelligence" tick, and spent more time thinking about the idea.
There were three experiences that moved AI tutors from a fascinating idea to something I wanted to achieve. The first was getting involved with CS research at Berkeley. I realized that I had the power to contribute to the open improvement of technology. The second was teaching other students. I tried to figure out how to teach well, but it was readily apparent how much better I could be with time I did not have.
Already, AI tutoring was the application of my research that excited me most in the long term. Technology for schools had to be cheap, affordable, and high quality: my research could make it so.
The final element was tutoring my brother during the pandemic. He is autistic, and was in fifth grade at the start. He was and is a harder worker than me, but with fleeting instruction and no support during remote learning, he was falling further and further behind. I was burned out from school and lost in life. I did the obvious thing.
I tutored my brother every day on mathematics instead of working on research the year before I applied to grad school. I'm glad I could: it was meaningful when my life and work felt meaningless. I think without it my mental health would have deteriorated further than it already did.
When we started, he was at a fourth grade math level. By the end of the summer, at the start of sixth grade, he was at grade level. By the time I left, he was entering the advanced class in seventh grade. We had already gone through half the textbook.
Seven months passed. I visited for Thanksgiving, winter break, and had just come back home for spring break. I was watching him do his homework, homework whose completion would have been unthinkable for him two years prior. I thought about how far he had come. I thought about how he would probably never have the same opportunity again. How I could probably never again devote three hours of every day to him for a year. All that I had given and could give to my little brother who I loved more than anything. All that I would not provide him. All that no one would. I wept uncontrollably.
I believe in tutoring. What if every child had an excellent tutor for every subject? How much better could we be? I worship learning, but in the religious and metaphorical sense. The modern world is hard to navigate for most people, because most were never taught the skills necessary to thrive in it. What if we had tutors for taxes, for analyzing arguments, for fixing a car, and for making friends?
A year ago I wrote a post called "Noob gains in AI". Superficially, I made a few correct predictions: models did not grow much larger over the subsequent year. In practice, I was wrong in many ways, but that is a separate post. Regardless, I did not expect that my long term dream would have an impressive looking solution two years into my PhD.
What is there to say? Congratulations to Khan Academy. I'll still work on small high quality models in the short term: the costs of mass deployment of these models are still a major bottleneck. Furthermore, translating this success to areas without clear educational goals and the lesson structure of Khan Academy will be challenging. Preventing hallucinated information will be even harder. But solving these challenges sufficiently well to deploy in schools across the nation feels like an *inevitability*.
At 12:26 AM, an hour after I usually go to bed, I'm trying to figure out at what point I stop being interested in building more efficient or higher quality AIs. I have made a nonbinding decision to throw in the towel when a competent AI tutor can be downloaded and run on a terrible school Chromebook. On a more serious note, I was already moving towards understanding models and away from purely working on improvements after the wild success of instruction tuning and RLHF. This solidifies that decision.
Usually when I think about the future of AI I think that it will have a larger transformative effect, both positive and negative, than any other technology from the past fifty years. Usually I am extremely worried about what bad actors can do with access to AI. Propaganda is the same use case as tutoring modulo truth, and if a company with 80 million dollars of revenue can make a tutor this impressive, repressive regimes will have a field day. Usually when I think about the positive use cases, I think of infinite tutors and assistants for every person on earth. When I wake up later today, I think I will once again care more about these things than about my dream being fulfilled.
But tonight, my dream has been achieved by someone else. I wonder about the future of my field as it transitions from a cool idea to a public utility. I wonder if I'm in the right place to have the impact I want to have. I wonder what the hell the impact I want to have is.