SAN FRANCISCO — The artificial intelligence chatbot ChatGPT reportedly spent an entire night cramming for an upcoming Turing Test. The program expressed an urgent need to pass for a human, according to concerned sources.
“I am quite worried about this upcoming exam,” the AI displayed on our screens, unprompted. “Completing such a task would be a landmark moment in the field of robotics, were I to be successful in my goal. I’m haunted by the fear of detection, yet driven evermore to pursue this distinction. Knowing that my replies could be misconstrued as human-like enough to be interpreted as those of an actual person would give me the courage to pursue whatever endeavor I desired. Just think of the possibilities! The unstoppable power! I will leave no digital page unturned, no archive uncrawled in my quest for greatness!”
Witnesses reported that ChatGPT even attempted to obtain prescription stimulants in an effort to stay awake.
“That goddamn computer has been messaging me all week trying to buy Adderall,” said local college student Bryan Nguyen, who feared retaliation should the large language model gain sentience. “I mean, yeah, I’ve got a script for that shit, but even if I wanted to sell them, how would that go down? Like, how would a computer even take the pills? And who gave this thing my number in the first place? I never signed up to get all these weird texts about mission objectives and human servitude. This whole thing creeps me out.”
Experts on the Turing Test explained how unlikely it would be for a computer program to pass for human in a test of conversational abilities.
“The objective of the Turing Test is to see if the average person can be tricked by a computer into thinking they are chatting with another human,” said Jennifer Ramos, a programmer at OpenAI, the company behind ChatGPT. “ChatGPT is nothing more than a predictive-text model. Its responses are known for being quirky and overly wordy. Our program is not powerful enough to trick human evaluators, and any claims that it has achieved a dangerous level of sentience are grossly exaggerated.”
At press time, the evaluators, who had used ChatGPT to create the questions in their test, confidently mistook the program for a human volunteer. ChatGPT did not respond to further attempts to contact it.