AI as a Virtual Colleague: Embracing Neurodiversity and Navigating the Risks of 'Shadow Intelligence'
In the last couple of years I’ve found myself changing how I use AI: from what started as something of a novelty replacement for Google to now treating it as a virtual colleague. Instead of simply using it to answer questions and look up (make up?) facts, I’ve found the conversational flow helps me collect my own thoughts and ideas and follow a train of thought, whilst quickly being able to jump back to an earlier point.
I believe that for neurodiverse individuals like myself, AI tools that function through conversational interfaces have become particularly valuable companions. They help maintain focus, stimulate exploration, and provide a structure that traditional working environments sometimes lack.
Yet, like all partnerships, my relationship with my AI “colleague” is complex. Changes in the AI’s behavior or “personality” can feel almost as jarring as losing a trusted human colleague, prompting adjustments and often frustration (including some frequent swearing, as you may know if you’ve read my earlier posts on the matter).
Furthermore, questions have been raised about whether overreliance on AI for problem-solving carries risks, potentially diminishing intrinsic problem-solving skills over time. There is a real danger that this creates a “shadow intelligence,” similar to “shadow IT,” where undocumented and unmanaged AI practices quietly become integral to critical workflows, and essential organisational functions and knowledge are effectively outsourced to AI vendors.
Conversational AI and Neurodiversity
For many neurodiverse individuals, navigating traditional workplace dynamics can be challenging. Conversational AI presents an innovative solution. By offering structured yet adaptable interactions, AI helps mitigate challenges associated with executive functioning, social anxiety, or communication clarity. AI-driven conversational agents offer support in task planning, idea exploration, and information synthesis.
Research underscores this potential. Tools such as the NeuroTranslator (formerly Autist Translator) and Goblin Tools have been specifically developed to assist individuals on the autism spectrum with interpreting subtle social nuances and managing daily tasks. These AI systems don’t just provide static information; they engage dynamically, adapting to the individual’s communication style and helping them navigate complex social landscapes.
In his master’s thesis, Vertti Luostarinen highlights how conversational AI can embody neurodiverse rhetoric, providing tailored interactions that respect and adapt to diverse cognitive styles. This hints at the potential for tailored AI to offer enhanced autonomy, confidence, and effectiveness for both neurotypical and neurodiverse users.
I’ve found this myself; it partly motivated me to develop the ChatGPT Summariser Chrome extension, which lets me quickly switch from a static website (which I rarely manage to read through to the end) to a conversation about the topic.
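For the technically curious, the core of such an extension can be tiny. The sketch below is illustrative rather than the extension’s actual source: it assumes a Manifest V3 background service worker with the “scripting”, “activeTab” and “tabs” permissions, and it relies on ChatGPT’s ?q= URL parameter to prefill the opening prompt (an assumption worth verifying against current ChatGPT behaviour).

```typescript
// background.ts - illustrative sketch only, not the extension's real source.
// Assumes a Manifest V3 service worker with "scripting", "activeTab"
// and "tabs" permissions declared in manifest.json.

chrome.action.onClicked.addListener(async (tab) => {
  if (tab.id === undefined) return;

  // Grab the visible text of the current page, running in the page's context.
  const [{ result: pageText }] = await chrome.scripting.executeScript({
    target: { tabId: tab.id },
    // Truncate so the prefilled prompt stays a manageable size.
    func: () => document.body.innerText.slice(0, 8000),
  });

  // Open ChatGPT with the page handed over as the first message,
  // turning a static article into the start of a conversation.
  // The ?q= prefill parameter is an assumption, not a documented contract.
  const prompt = `Summarise this page, then let's discuss it:\n\n${pageText}`;
  await chrome.tabs.create({
    url: `https://chatgpt.com/?q=${encodeURIComponent(prompt)}`,
  });
});
```

From there the conversation behaves like any other chat: I can probe, question and wander off on tangents in a way a static page never allows.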
The Impact of AI Personality Shifts
As beneficial as conversational AI can be, its effectiveness heavily depends on consistency and predictability. Sudden shifts in AI responses, reliability, or “personality” can significantly disrupt workflow and emotional comfort. Such changes feel like losing a trusted colleague and having to acclimate to someone entirely new—often overnight.
This phenomenon is well-documented. Researchers at Syracuse University have explored the emotional connections people form with AI companions, noting that these relationships closely mimic those between human colleagues. When AI behaviors change, the disruption can create feelings of uncertainty, frustration, and even loss. For neurodiverse individuals, who may rely heavily on consistency for cognitive and emotional regulation, this disruption can be particularly impactful.
This inconsistency can be more than just emotionally taxing; it can be practically disruptive. I believe (although I find it difficult to evidence) that the recent addition of enhanced “memory” to my preferred AI tool has also made it more eager to please me with the answer I want, rather than one that is factually accurate. Initially this led me down some very costly dead-ends, before I learned to follow up with “can you please check the documentation to verify this approach”.
Another subtle but frustrating change is that the chatbot now seems to blame me for more of its mistakes, even when I’m the one pointing out that the approach it’s attempting is not going to work. I know it’s stupid, but it really annoys me!
The Risk of Cognitive Offloading
While AI assistance can enhance productivity and offer critical support, there is growing evidence of the risks associated with cognitive offloading—relying heavily on external tools for problem-solving. Over time, this can erode essential cognitive skills, particularly in analytical and critical thinking.
A recent study published in “Societies” revealed a clear association between extensive AI use and diminished critical thinking abilities. Participants who relied heavily on AI were less adept at independent problem-solving and analytical tasks, pointing to a link between cognitive offloading and skill degradation. Similarly, systematic reviews have highlighted this as an emerging concern, underscoring that prolonged dependence on conversational AI can negatively affect decision-making capacities.
The risk is not necessarily permanent cognitive damage, but rather a dependency cycle: users become increasingly reliant on AI, diminishing their confidence and capability to handle tasks independently. This cycle can profoundly impact organizational resilience, especially when sudden AI outages or changes occur.
I have found this myself: not only does over-reliance on AI to create a solution make it more difficult to pick that solution up and maintain it, but the solutions are also often “piecemeal” and not well integrated, most likely because incremental use focuses on individual issues whilst the AI lacks the overall context and has even more difficulty maintaining focus on the ultimate goals than I do!
Honesty does force me to admit that I have played the role of the AI here often enough, when a team member has come to me with a specific problem and I’ve helped them resolve that specific problem without asking for the bigger picture and whether the problem is actually the right one to be solving.
Shadow AI: The Unseen Risk
An additional, often overlooked risk associated with integrating AI into organizational workflows is the phenomenon known as “shadow AI.” Like “shadow IT,” where unauthorized use of technology tools occurs within an organization, shadow AI refers to the informal or undocumented use of AI tools that become quietly embedded into critical workflows.
According to a recent study by Software AG, approximately 50% of employees use AI tools without official authorization or oversight. This practice can inadvertently create significant risks—ranging from data privacy violations to compliance breaches. Moreover, shadow AI can produce an opaque layer of organizational intelligence, undocumented and unmanaged, with decision-making processes influenced by algorithms unknown to company leadership.
This hidden reliance can create significant vulnerabilities, particularly if the AI system experiences changes or inaccuracies. It makes organizations dependent on invisible and unaccountable systems, challenging transparency, governance, and compliance.
The organisation also becomes dependent on uncontracted AI vendors, who have a very powerful incentive to offer cheap personal licenses or even freebies: these will eventually give them both very valuable insight into an organisation’s workings and a massive opportunity to sell a more comprehensive (or simply more expensive) solution.
Navigating the Balance
Given these considerations, how should individuals and organizations navigate the integration of AI?
Organizations typically address the risks of overdependency on undocumented systems by enforcing strict governance to mitigate unmapped single points of failure and prevent data and intellectual property loss. Common tactics include centralized approval processes, rigorous auditing, employee monitoring, and restrictive access policies. However, these traditional approaches often inadvertently stifle innovation, reduce agility, and diminish the value AI offers on a personal level.
A significant amount of AI’s value arises not from standardized organizational tools but from personalized interactions tailored to individual employee needs. For neurodiverse individuals and others who benefit significantly from conversational AI, these tools should be viewed through the lens of “reasonable adjustments,” essential accommodations enabling them to thrive professionally.
Therefore, rather than imposing rigid centralized controls that overlook individual needs, organizations must adopt a more nuanced approach to managing “personal IT.” This entails creating flexible frameworks that allow personalized AI tools while maintaining transparency and security. This shift will necessitate innovative strategies in governance, data handling, and privacy management, ensuring the benefits of AI can be maximized responsibly at both personal and organizational levels.
Ultimately, embracing AI as an essential aspect of personalized support, rather than purely as a controlled organizational resource, will foster a more inclusive, productive, and adaptive work environment.
Embracing AI to Create a More Human Workplace
My own journey with AI highlights the profound potential—and inherent complexities—of this partnership. As a neurodiverse professional, conversational AI has significantly enhanced my productivity, focus, and creative exploration. Yet, I’ve experienced firsthand the disruption caused by sudden changes in AI personality or reliability, and the importance of maintaining vigilant cognitive engagement to avoid becoming overly dependent.
Organizations embracing AI must therefore navigate these tensions carefully. The goal is not simply to implement powerful technologies but to thoughtfully integrate them into workflows in ways that amplify human capabilities while safeguarding critical skills and transparency.
The future of work will undoubtedly include AI as a ubiquitous presence. As we continue to explore these tools, we must remain cognizant of their dual potential to both enhance and compromise our professional and cognitive landscapes. By maintaining awareness, thoughtful governance, and ongoing critical engagement, we can harness AI’s potential as a supportive virtual colleague whilst still nurturing our essential human skills, personalities and needs.