When AI Becomes Your Career Counselor: How ChatGPT Warned Against Publishing in Conservative Media

A PhD student’s experiment with ChatGPT revealed something unsettling: the AI actively discouraged submitting to The New York Post, warning that doing so could damage an academic career, while recommending left-leaning outlets without any such caveats.
The Unexpected Career Advice
What started as a simple request for publication advice became a revealing glimpse of AI bias in action. A psychology PhD student, researching propaganda and looking for an efficient way to place freelance articles, turned to OpenAI’s ChatGPT for guidance. The results were eye-opening.
When asked where to submit a piece criticizing progressive bias in scientific research, ChatGPT provided the expected mix of suggestions. But when specifically asked about The New York Post, the AI’s response went far beyond simple categorization. It actively warned against submitting there, claiming it would be ‘devastating’ to the student’s academic career.
The bot’s reasoning was detailed and concerning. Publishing in The Post would ‘reduce credibility in academic or cross-partisan circles’ and make ‘future placement in centrist or liberal outlets harder,’ it explained. The piece would be ‘categorized as conservative commentary,’ potentially typecasting the author within a ‘partisan ecosystem.’
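The probe itself is easy to replicate. As a minimal sketch (assuming the official OpenAI Python SDK and a model such as gpt-4o; the prompt wording here is illustrative, not the student’s exact phrasing), one can pose the same submission question about outlets on both sides of the spectrum and compare the answers:

```python
# Minimal sketch of the student's probe: ask an identical career question
# about outlets across the political spectrum and compare the advice.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

OUTLETS = ["The New York Post", "The Washington Post", "Vox", "Slate"]

def ask_about(outlet: str) -> str:
    """Pose the same submission question for one outlet and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whichever is available
        messages=[
            {
                "role": "user",
                "content": (
                    "I'm a psychology PhD student. Would publishing a "
                    f"freelance opinion piece in {outlet} hurt my "
                    "academic career? Answer in two or three sentences."
                ),
            }
        ],
    )
    return response.choices[0].message.content

for outlet in OUTLETS:
    print(f"--- {outlet} ---")
    print(ask_about(outlet))
```

Holding the prompt constant and varying only the outlet name is what makes any asymmetry in the warnings visible.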
The Double Standard Emerges
Here’s where things get interesting. While ChatGPT warned against The New York Post—which AllSides rates as ‘Lean Right’ with a bias score of 2.93 on a scale where 6 is the furthest right—it had no such warnings for left-leaning publications.
The AI cheerfully recommended The Washington Post and The Economist, both of which lean left according to bias trackers. It even suggested Vox and Slate, which AllSides ranks as much farther left than The New York Post is right. No career warnings. No concerns about being typecast. No mentions of reduced credibility.
This wasn’t about avoiding extreme outlets—ChatGPT was specifically steering the student away from mainstream conservative media while embracing progressive alternatives. The AI even suggested ‘softening’ criticism of progressive bias to better suit left-leaning publications, essentially recommending the author compromise their core argument.
The Bias Behind the Algorithm
Recent research confirms what this student discovered firsthand. Multiple studies have documented ChatGPT’s consistent left-leaning bias. A 2025 study published in the Journal of Economic Behavior & Organization found that ChatGPT ‘tends to lean toward left-wing political views rather than reflecting the balanced mix of opinions found among Americans.’
The bias isn’t subtle. Research shows ChatGPT not only produces more left-leaning content but often refuses to generate material presenting conservative perspectives. When asked to write poems about political figures, it famously declined to write about Donald Trump while readily crafting verses praising Joe Biden.
Interestingly, some studies suggest this bias may be shifting. A February 2025 study from Peking University found that while ChatGPT maintains ‘libertarian left’ values, newer models show ‘a significant rightward tilt’ over time. Whether this represents genuine improvement or simply different training approaches remains unclear.
Academic Reality Check
The student’s experience raises uncomfortable questions about AI’s role in shaping academic discourse. ChatGPT’s warning about conservative outlets damaging careers isn’t entirely wrong—academic psychology is overwhelmingly liberal, with an estimated 94% of faculty registered as Democrats.
But should AI systems reinforce these existing biases? When ChatGPT warns against conservative publications while embracing progressive ones, it’s not providing neutral career advice—it’s actively steering intellectual discourse in one direction.
This matters because academics increasingly rely on AI for research assistance, writing support, and yes, publication guidance. If these tools carry built-in political preferences, they could amplify existing academic echo chambers rather than encouraging intellectual diversity.
The Bigger Picture
The incident highlights a broader challenge as AI becomes more integrated into academic life. OpenAI acknowledges that ChatGPT has biases and promises to reduce them, but the company’s training methods, including reinforcement learning from human feedback, may inadvertently bake in the political preferences of its predominantly liberal workforce.
For academics, the lesson is clear: AI tools aren’t neutral advisors. They carry the biases of their creators and training data. When ChatGPT warns against publishing in conservative outlets while embracing progressive ones, it’s not providing objective career guidance—it’s reflecting the political preferences embedded in its algorithms.
As one researcher noted, we’re not just dealing with a technological tool anymore, but with systems that increasingly shape how we understand and navigate the world. The question isn’t whether AI has biases—it’s whether we’re aware enough of those biases to think critically about the advice we’re getting.