
A recent study by researchers from Northwestern University and American University examined how advanced AI models—ChatGPT-5, Gemini 2.5, and Claude 4.5—predict job vulnerability to automation. The findings, published on the National Bureau of Economic Research website, reveal significant disagreements among these AI tools, especially regarding supervisory and mixed cognitive-physical roles. While physical jobs drew more consensus, professions such as accounting and advertising management received widely varied risk assessments, underscoring the unreliability of AI-generated exposure scores for forecasting job losses.
The article group presents a largely neutral perspective focused on academic research findings without political framing. It conveys the views of economists and AI developers indirectly, through the study itself, rather than emphasizing political or ideological interpretations. The coverage centers on the technical reliability of AI predictions rather than policy or partisan debates.
The overall tone is cautious and analytical, emphasizing uncertainty and limitations in AI-generated job risk assessments. The articles avoid sensationalism, instead highlighting discrepancies and the need for careful interpretation. The sentiment is mixed, reflecting both the potential of AI tools and their current shortcomings in predicting employment impacts.
Each source's own headline, political lean, and sentiment — so you can see framing differences at a glance.
| Source | Their headline | Bias | Sentiment |
|---|---|---|---|
| timesnow | Could AI Be Misleading Us About Which Careers Are Safe? Here's The Truth | Center | Neutral |
| mint | Researchers asked ChatGPT, Gemini and Claude which jobs are most exposed to AI. The chatbots wildly disagree | Center | Neutral |
Mint broke this story on 11 May, 06:49 am. Other outlets followed.
Well-covered story — coverage matches public importance.