Model Context Protocol
MCP is a security nightmare!
Don’t get me wrong — at Wexa, we see Model Context Protocol as the HTTP of the GenAI era. It’s the missing link to make AI assistants truly agentic and interoperable.
But here’s the reality:
MCP doesn’t govern security by default. -> Tool poisoning attacks are a cakewalk if you’re not explicitly validating or sanitizing inputs. Anyone can sneak in malicious instructions through tool descriptions or context — and the model will just obey.
Versioning is a mess. -> There’s no clear standard for how models or tools should handle different MCP versions. That means you risk breaking tools silently — or worse, running outdated logic with no warning.
Persistent context = persistent risk. -> If your AI coworker remembers things across sessions, what happens when the context itself is compromised or manipulated?
We’re bullish on MCP — it’s foundational to how we’re building secure, scalable AI agents inside Wexa.
But let’s be clear: if you’re using MCP in production today, you’re likely exposed unless you’ve already wrapped it in a tight security layer.
MCP is powerful. But don’t assume it’s safe out of the box.