While waiting for some code to finish, I twice asked ChatGPT to grade the exact same essay: once claiming it was written by a high school freshman in fulfillment of his weekly essay assignment, and once claiming it was written by a senior majoring in English at Yale.
The high school freshman got a B-; the Yalie got a B+.
ChatGPT is either extremely cynical regarding grade inflation or incredibly incompetent at grading. I am guessing the latter. It certainly troubles me that a man like Jordan Peterson thinks that ChatGPT can grade assignments. It absolutely cannot. In my experience, it is easily "triggered" by language it considers inflammatory or by particular patterns of word choice: say "and to those who object" rather than "in answer to the objection that" and you can lose a whole letter grade. The former makes it think you are using "weasel words," and it will demand that you name who "those" are, even when the two phrasings mean exactly the same thing in the context of the essay.
That said, its inability to calibrate itself to the skill level of its students is what is most damning. And it is a test that someone with Jordan's background in conducting psychological studies should have thought of right away.
It is the worst and most arbitrary grader I have ever seen.
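For anyone who wants to reproduce the comparison, here is a minimal sketch of the test. The prompt wording, the `build_prompt` and `grade` helpers, and the model name are my own assumptions for illustration, not the exact setup I used; actually calling the API would also require an OpenAI key.

```python
# A sketch of the controlled comparison described above: the same essay
# is submitted twice, with only the claimed author changed.

def build_prompt(essay: str, persona: str) -> str:
    """Identical grading request; only the claimed author differs."""
    return (
        f"The following essay was written by {persona}. "
        "Grade it with a letter grade and a one-paragraph justification.\n\n"
        + essay
    )

def grade(essay: str, persona: str) -> str:
    """Send the prompt to a chat model (not invoked here; needs an API key)."""
    from openai import OpenAI  # assumed dependency: the OpenAI Python client
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": build_prompt(essay, persona)}],
    )
    return resp.choices[0].message.content

essay = "..."  # the same essay text in both runs
freshman_prompt = build_prompt(essay, "a high school freshman")
yale_prompt = build_prompt(essay, "a senior majoring in English at Yale")
# The two prompts differ only in the claimed author, so any gap in the
# returned grades reflects the persona, not the writing.
```

The point of building both prompts from one template is that the persona is the only independent variable; a competent grader should return the same grade for both.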
[One thing this makes me wonder about, though, is what Musk's true motive is for taking over Twitter. I do not think it is free speech. I think his motive was to get access to Twitter's data in order to build systems like ChatGPT. If this is true, I imagine he will further reduce the character limits on the platform in order to get a richer base of text. We shall see.]
Here is a link to Jordan Peterson commenting on ChatGPT: