Quora relies on machine learning. No surprise there. Any major service that accepts user-generated content needs more than human moderation to keep up with the influx of submissions. Algorithms help filter out bad content and promote good content (or at least content that increases user engagement). But this year seems to be an inflection point for algorithms that not only moderate content, but also create it.
Here are some examples of machine learning technology that has gotten attention this year, plus a couple of Quora-specific bots.
Quora Prompt Generator
Quora believes they have a question shortage. Their theory is that there are useful answers that haven’t been written yet because no one has asked the right questions. Four years ago, they started an initiative called the Quora Partner Program, where they paid users to ask questions. That program was discontinued (for English language Quora) this year, and we now have the Quora Prompt Generator (QPG), a bot that generates questions with no human intervention.
ChatGPT
OpenAI released their ChatGPT chatbot to the public on November 30th. It quickly became so popular on Stack Overflow that moderators there have temporarily banned it. The problem is that ChatGPT makes it easy for users to generate large amounts of content that looks reasonable but which the author hasn’t verified. It’s true that human authors can also generate questionable content. But when humans write low-quality content, it often comes with signals like poor grammar and spelling, or code that doesn’t compile. The ChatGPT technology eliminates these signals, so it takes longer to identify problematic answers.
Quora doesn’t focus on objective truth the way Stack Overflow does, so they are less worried about ChatGPT. Although users could overwhelm Quora moderation if they submitted enough machine-generated spam or other noise, Quora systems can handle much more incoming content than Stack Overflow can, because they have different standards for what is acceptable. When people notice low-quality content on Quora, the official response is always to report, downvote, and otherwise avoid interacting with it. In other words, handle it without bugging the moderation team. Quora moderation deletes questions and answers much less frequently than Stack Overflow does. Not surprisingly, Quora has no official policy on ChatGPT.
Poe
On December 20th, Quora announced a feature called Poe (Platform for Open Exploration) that sounds a lot like ChatGPT. It’s currently in an invitation-only status, so I haven’t tried it yet. But according to the announcement, it will give Quora users an option to get bot-generated answers, just as Quora’s QPG provides bot-generated questions. Maybe we’ll even get a complete loop of bot 1 (QPG) asking questions, and bot 2 (Poe) answering them.
DALL·E
On September 28th, OpenAI made their DALL·E image generation beta available to any interested user. The image at the top of this post is DALL·E’s interpretation of “A cyberpunk illustration of a computer programmer at a computer in red and green.”
AlphaCode
On February 2nd, DeepMind announced AlphaCode, a coding engine that can solve Codeforces problems at about the level of the median Codeforces contestant.
The Outlook for Humans
With so many new bots appearing this year, you might wonder: “Should humans be worried that AI will steal our jobs?” Here’s what ChatGPT says:
There is no denying that AI has the potential to disrupt the job market and lead to significant changes in the way work is done. Some jobs may be automated or replaced entirely by AI, while others may be transformed by the adoption of AI technologies.
That being said, it is important to recognize that the impact of AI on the job market is likely to be complex and varied. Some jobs may be lost to automation, while others may be created as a result of the development and deployment of AI systems. It is also possible that some jobs will be transformed by the adoption of AI, with workers taking on new roles and responsibilities as a result.
As it often does, ChatGPT answered this question using a “this, but on the other hand that” structure. But it’s actually a reasonable answer in this case. If AI affects the economy the way other types of automation have throughout history, it will cause a mixture of disruption and opportunity. For knowledge workers in general and programmers in particular, it would be wise to think about ways to harness these new technologies, rather than worrying about which human skills they will replace. Programmers like to automate things. Rather than editing a text file by hand, we like to use regular expressions or scripts to make changes. We should think of ways to use these new machine learning tools in similar ways. We already have email clients that suggest the next word to write, and code editors with sophisticated autocomplete systems that can suggest an entire block of code. The next step is to use language model tools to write an entire email, or an entire function.
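To make the regex-over-hand-editing point concrete, here’s a minimal sketch in Python. The function name `old_fetch` and the snippet being edited are hypothetical, invented purely for illustration; the idea is that one pattern substitution handles every occurrence at once, the same way a language model tool might one day handle a whole email or function.

```python
import re

# Hypothetical source text: several calls to a deprecated function that we
# want to rename everywhere, rather than editing each call by hand.
source = """
result = old_fetch(url)
data = old_fetch(endpoint)
print(result)
"""

# One substitution renames every call site. The \b word boundary keeps us
# from accidentally matching names like my_old_fetch.
updated = re.sub(r"\bold_fetch\(", "new_fetch(", source)

print(updated)
```

The same habit, scaled up, is what the post suggests: hand the mechanical transformation to a tool, then review the result yourself.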
If a language model is writing our email and our code, what’s left for us to do? The answer involves the aspect of human cognition that seems to be difficult to simulate in software: general intelligence. While GPT-3 can generate text, it can’t determine if that text is appropriate for your needs. So you still need to read the email and see if it makes sense, and you need to test the code to see if it fulfills your client’s requirements. Think of yourself as the supervisor of a junior associate who can finish their work quickly, but who has limited common sense. This is just another example of leveraging the strengths of your tools. Spreadsheets never make calculation errors, but they have no way to tell you if your input data is correct. So let the spreadsheet do the calculation and use your human abilities as a sanity check of the result. The tools we have now are more sophisticated, but they still need to be paired with our uniquely human ability to see the big picture.