Artificial intelligence (AI) is a field of computer science that has surged in popularity. Machines that can be fed information and "think" on their own have become fairly commonplace, but 2022 saw the rise of two major machine-learning-based tools that continue to captivate the Internet with their remarkable abilities: DALL-E 2 and ChatGPT.
These two projects were created by the same company, OpenAI, whose stated mission is to ensure that artificial general intelligence benefits all of humanity.
While the company saw major success in 2022, OpenAI's rise to this point has been gradual. Its first public research post, in 2016, described progress on generative models: systems that are fed large amounts of data and trained to generate similar data of their own. In 2018, the company began to scale up its AI training to give its models more information to work with, and in February 2019 it introduced a model known as GPT-2, which was trained on 40GB of Internet text to predict which word would come next. This work, along with many other OpenAI projects, laid the foundation for tools such as DALL-E 2 and ChatGPT to synthesize new content from a given prompt.
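To make the next-word idea concrete, here is a minimal sketch (not part of OpenAI's own tooling) of next-word prediction using the publicly released GPT-2 weights through the Hugging Face transformers library; the prompt and the sample output are illustrative.

```python
# Illustrative sketch: next-word prediction with the public GPT-2 model,
# loaded via the Hugging Face "transformers" library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Artificial intelligence is a field of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, vocab_size)

next_token_id = int(logits[0, -1].argmax())    # most probable next token
print(tokenizer.decode([next_token_id]))       # e.g. " computer" (output will vary)
```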
Given their almost magical ability to take a text prompt and return a fully generated product, it is no surprise that these tools have both positive and negative effects.
In the education field, tools such as ChatGPT have made plagiarism detection more difficult than before. It was previously feasible to detect cheating by checking whether submitted text appeared elsewhere, but ChatGPT synthesizes new text that closely resembles human writing, making plagiarism nearly impossible to prove in an academic setting. Essays can now, in theory, be written without any effort, and programming projects that would normally take weeks can be finished in minutes. However, there are early efforts to detect whether a written work was plagiarized using AI. Edward Tian, a computer science student at Princeton, has created an app called GPTZero, which is trained on data similar to ChatGPT's to judge whether an essay was AI-generated. In an ironic sense, artificial intelligence seems to be the key to combating the repercussions spurred by artificial intelligence tools.
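GPTZero's internals are not fully public, but detectors of this kind commonly score how statistically predictable a passage is to a language model (its perplexity), on the assumption that AI-generated text tends to be unusually predictable. The snippet below is a rough sketch of that idea using GPT-2, not GPTZero's actual implementation.

```python
# Rough sketch of perplexity-based AI-text detection (illustrative only,
# NOT GPTZero's actual method). Lower perplexity means the text is more
# predictable to the model, which is one signal of machine generation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss     # mean cross-entropy per token
    return float(torch.exp(loss))

essay = "Photosynthesis is the process by which plants convert sunlight into energy."
print(perplexity(essay))  # unusually low scores would be flagged as likely AI-generated
```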
Furthermore, AI tools such as ChatGPT are not entirely accurate. Teresa Kubacka, a data scientist, asked the tool about a made-up physical phenomenon, a "cycloidal inverted electromagnon," and ChatGPT confidently explained it, providing specific details about how it could be created and used. As a machine trained on human data, it will inherently make "human-like" mistakes. This raises concerns about how trustworthy AI can be and, in a way, illustrates that AI has a long way to go before it becomes the all-knowing machine of popular imagination.
While ChatGPT is not entirely accurate, it is still knowledgeable on virtually any topic and can serve as an immediate teacher. One could argue that the information resources people already rely on (e.g., Google, other people) are also occasionally flawed and inaccurate; in that sense, ChatGPT is no different from other ways of receiving information. Where it outshines traditional forms of learning is its ability to act as a natural conversationalist. ChatGPT communicates like a human, which makes it easy for users to ask follow-up questions or request clarification on a particular answer, something that takes more effort and time with a typical search engine. It will be a long while before ChatGPT can replace a teacher, but it is already useful as a semi-reliable tutor.
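As an illustration of why follow-up questions are so natural here, the sketch below uses the chat endpoint of OpenAI's Python library (API shape as of early 2023; the model name, key, and prompts are placeholders): each new question is simply appended to the running conversation, so the model answers it in the context of what was already said.

```python
# Hypothetical sketch of a tutoring exchange with follow-up questions,
# using the pre-1.0 OpenAI Python library (names are placeholders).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = [{"role": "user", "content": "Explain how photosynthesis works."}]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
history.append(first.choices[0].message)

# A follow-up only needs to be appended to the same history; the model
# answers it in the context of the earlier explanation, like a tutor would.
history.append({"role": "user", "content": "Can you explain that again for a ten-year-old?"})
second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
print(second.choices[0].message["content"])
```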
For DALL-E 2, the ability of an AI to create a new image from a few input parameters has left the art community in disarray. When photography was introduced, art shifted to new heights because capturing a scene realistically became quick and easy. With DALL-E 2, art is in a limbo state. The tool has the ability to mimic any artist, style, and idea, and it could even produce a new artistic medium given the right inputs. As a result, artists fear for the future of creative expression and ownership. Similar to the plagiarism concerns around ChatGPT, how will people know whether a given piece of artwork was made by a human or by an AI? How will artists find work if a machine can produce the same work in less time and at no cost?
While art is in a confusing state because of DALL-E 2, the AI is not a perfect artist. Humans have an innate understanding of how the physical world operates and how objects should interact with one another; DALL-E 2, on the other hand, does not truly understand what it is making, so it makes mistakes in how objects are placed. The photo below depicts a plant store, but DALL-E 2 mistakenly placed plants that phase through the store's doors and windows. Seen this way, AI art could serve as a powerful tool for artistic inspiration. Designers could use AI-generated images as references for their actual work, and the mistakes the tool makes can be ironed out by a human. AI art tools can spur collaboration between artists and could shift art culture in a way akin to what photography did. However, to truly achieve this, these tools must be implemented responsibly so that humans do not lose what makes them "human".
As noted above, AI companies such as OpenAI should actively work toward building more protective measures into these tools. In terms of current efforts, OpenAI researcher Scott Aaronson has stated that the company is working on ways of "watermarking" the text generated by ChatGPT with a hidden signal that can later be checked for plagiarism. A similar technique could likely be applied to the images produced by DALL-E 2 as well.
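Aaronson has described the idea only at a high level, so the following is a toy illustration of how a statistical text watermark can work in general, not OpenAI's actual scheme: a secret key deterministically marks some words as "green" in each context, generation quietly favors green words, and a detector holding the key checks whether a text contains far more green words than chance would allow.

```python
# Toy illustration of a statistical text watermark (NOT OpenAI's scheme):
# a secret key marks roughly half the vocabulary as "green" for each context,
# a watermarking generator prefers green continuations, and a detector with
# the key measures whether a text is suspiciously green-heavy.
import hashlib

SECRET_KEY = "demo-key"  # hypothetical shared secret between generator and detector

def is_green(prev_word: str, word: str) -> bool:
    digest = hashlib.sha256(f"{SECRET_KEY}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0    # ~half of all continuations count as green

def green_fraction(words: list[str]) -> float:
    hits = sum(is_green(prev, w) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Ordinary text hovers near 0.5 on this score; text produced by a generator
# that favors green words scores noticeably higher and can be flagged.
sample = "the cat sat on the mat in the plant store".split()
print(round(green_fraction(sample), 2))
```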
In terms of performance, OpenAI expects its multimodal models, including speech generation, to eventually surpass its plain-text models at natural language tasks. The CEO of OpenAI, Sam Altman, has explained that multimodal models (those able to generate text, visuals, audio, etc.) are currently less advanced than plain-text models, but that they will overtake plain-text models in the coming years. Furthermore, he sees a 50/50 chance that AI will take over several human tasks by 2035. It is certain that AI will have an enormous impact on how people interact with technology in the coming years. The Internet was once a limitless free-for-all, and it eventually required extensive protections to keep its users safe. With the right trajectory, AI will follow the same path and remain a tool rather than a hindrance.