Inspired by the example of my esteemed colleague Benjamin Voyer, I did the following experiment: I asked ChatGPT to write my biography, Wikipedia style. Like Ben Voyer, I was very surprised by the result.
In terms of form, it is perfect. It also shows that language doesn't matter: the same query, in French or in English, gives exactly the same text. In other words, ChatGPT seems to have an internal language, understood only by itself, which serves as a common basis for responses in different languages.
However, the substance is another matter. The table below illustrates what I mean:
To build on my esteemed colleague's analysis, it's not as if this information were hard to find: the vast majority of it is public and available on many sites, and even though I'm not on Wikipedia (quite normal, IMO), there is clearly enough public data to write a proper Wikipedia entry.
With such a list of errors, it is almost surprising that ChatGPT more or less identified my field of activity. The Augur in Information (AI) says "French economist and finance professor": I'm not an economist, but hey, it's at least reassuring not to have been classified as a "YouTube influencer" or a "septic tank drainer".
To conclude this little exercise, a fundamental observation: while ChatGPT can be very accurate (and helpful) when asked pragmatic questions – for example, how to use an Excel function – the same cannot be said when it comes to current events or people. It is not a search engine; it is a conversational assistant. In short, don't be surprised if it tells you fairy tales…
PS: I suggest you try this experiment with your own name. You can even publish the result on social media with the hashtag #ChatGepetto; we'll see whether it starts a trend…