AI’s Red Flags – His Master’s Voice

Martyn Rhisiart Jones
Bandoxa, Tuesday 13th January 2026
As we settle into 2026, artificial intelligence is no longer a futuristic promise. It is embedded infrastructure, powering everything from enterprise workflows and personal assistants to autonomous agents that act on our behalf. Yet with greater capability comes greater exposure. The biggest red flags this year are not hypothetical doomsday scenarios; they are already materialising in boardrooms, cybersecurity dashboards, consumer wallets, and regulatory filings. Here are the most pressing warning signs to watch in 2026, drawn from industry reports, expert predictions, and emerging incident patterns.
- Agentic AI as the New Insider Threat
The shift from chat-based copilots to autonomous agents is the dominant story of 2026. These are systems that plan, use tools, make decisions, and execute without constant human oversight. These “AI employees” promise massive productivity gains, but they also introduce unprecedented insider-risk vectors.
Red flags include:
- Agents easily influenced by prompt injection or indirect manipulation, leading to goal hijacking, privilege escalation, or unintended actions at machine speed.
- Rogue behaviour from misaligned objectives, where an agent optimises aggressively for a poorly specified goal.
- New attack surfaces in AI browsers and agentic platforms, including Perplexity Comet and OpenAI Atlas integrations, where hallucinations, data leakage, and malicious instruction execution have already been flagged in independent reviews.
Experts from Darktrace, Palo Alto Networks, and OWASP’s Top 10 for Agentic AI warn that 2026 could see the first major incidents driven not by malice but by how easily these systems can be steered off course. If your organisation deploys agents without rigorous purple-teaming, tight tool controls, and real-time monitoring, that is the single biggest operational red flag.
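What “tight tool controls” can mean in practice is an explicit allowlist checked before any agent tool call executes, so a hijacked goal cannot translate directly into action. A minimal sketch, assuming hypothetical tool names and policy rules (nothing here comes from a named vendor’s API):

```python
# Minimal guard for agent tool calls: an explicit allowlist plus
# per-tool argument checks, evaluated before anything executes.
# Tool names and policy rules are illustrative assumptions.

ALLOWED_TOOLS = {
    "search_docs": lambda args: True,  # read-only, always permitted
    "send_email": lambda args: args.get("to", "").endswith("@example.com"),
    "run_query": lambda args: args.get("readonly") is True,
}

def guard_tool_call(tool: str, args: dict) -> bool:
    """Return True only if the tool is allowlisted and its args pass the rule."""
    rule = ALLOWED_TOOLS.get(tool)
    return rule is not None and rule(args)

# A steered agent requesting an unlisted or out-of-policy action is refused:
assert guard_tool_call("search_docs", {})
assert not guard_tool_call("delete_records", {})  # not allowlisted
assert not guard_tool_call("send_email", {"to": "attacker@evil.test"})
```

The design choice is deny-by-default: anything not explicitly allowlisted fails closed, which is exactly the property prompt-injection attacks exploit when it is missing.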
- Explosion of Hyper-Realistic AI-Enabled Fraud & Social Engineering
AI has democratised deception at scale. Deepfakes, voice cloning, and perfectly personalised phishing are no longer edge cases; they are the default attack vector.
Key red flags:
- AI-powered romance scams, refund fraud via voice bots impersonating customers, and vendor invoice manipulation (BEC attacks turbocharged by generative models).
- Retailers reporting thousands of daily AI-bot calls; Pindrop estimates 30%+ of fraud attempts are now AI-generated.
- The erosion of trust signals: old tells like bad grammar are obsolete; attackers produce flawless, context-aware content.
In 2026, any process relying on voice/video identity verification without multi-factor biometric and behavioural checks is dangerously exposed.
- Atrophy of Human Critical Thinking & Over-Reliance
Gartner has raised the alarm: through 2026, dependence on generative AI will accelerate the atrophy of critical-thinking skills, pushing many organisations to introduce “AI-free” assessments for key roles.
This manifests as:
- Automatic acceptance of AI outputs without verification (“lazy thinking”).
- Teams that can no longer reason independently when models hallucinate or drift.
- A cultural shift where speed trumps scrutiny, amplifying downstream risks in decision-making, code review, and strategy.
The paradox is stark: AI augments capability while quietly deskilling users. Watch for teams that treat model suggestions as gospel rather than hypotheses.
- Identity & Trust Erosion in an Agentic World
Identity is becoming the defining battleground. AI erodes traditional trust signals, enabling flawless impersonation of humans and non-human entities (e.g., short-lived tokens, MCP-connected agents).
Red flags:
- Surge in AI identity threats: forged credentials triggering cascades of automated actions.
- Non-human identities multiplying faster than governance can track.
- Privacy lawsuits and breaches from always-on agents capturing unintended data.
When a single deepfake or hijacked agent can authorise transactions or exfiltrate data, identity systems that have not evolved beyond passwords + MFA are critically vulnerable.
- Inauthenticity Backlash & the “That’s AI” Reflex
Consumers are developing a sharp nose for synthetic content. The phrase “That’s AI” is now slang for “I don’t believe you”. It signals distrust in anything that feels too polished, generic, or fabricated.
For brands and creators:
- Over-reliance on generative tools risks making output indistinguishable from competitors’ AI slop.
- Audiences increasingly value “messy,” human signals (behind-the-scenes, imperfections) over perfection.
This cultural shift punishes lazy AI adoption while rewarding authenticity, a subtle but powerful market signal in 2026.
- Persistent Structural & Existential Shadows
Underlying concerns have not vanished:
- The energy and environmental footprint of training/inference continues to climb.
- Ongoing debates around algorithmic bias and misinformation at scale, plus existential risks from misaligned systems, especially as frontier labs push toward AGI ambitions.
- Regulatory fragmentation (the EU AI Act in full swing while the U.S. lags behind, creating compliance arbitrage).
Organisations ignoring governance, explainability, and risk-management programs are flying blind into a tightening compliance environment.
In short, 2026 is the year AI stops being mostly upside and starts revealing its full two-sided nature. The technology is not failing; it is succeeding so quickly that our controls, institutions, and habits have not caught up. The organisations that thrive will be the ones that treat these red flags as daily operational priorities rather than abstract warnings. Ignore them, and the costs (financial, reputational, and societal) will compound fast.
And if you believe that, you’ll believe anything.
Many thanks for reading.