
Who Gets to See the Prompt?

In shared workplace AI systems, disclosure is not always a personal choice. Sometimes visibility is built into the workspace itself.

A lot of workplace AI use happens in small, ordinary moments.

You ask a chatbot to help rewrite an email. You paste in a messy paragraph from a report. You ask it to explain a concept you feel like you should already understand. You try five versions of a prompt before getting something useful. You use AI to think through a conflict, summarize meeting notes, debug code, or turn half-formed thoughts into something coherent.

These interactions can feel private, even intimate. Not because they are always deeply sensitive, but because prompts often reveal the process behind the work: confusion, uncertainty, curiosity, shortcuts, drafts, mistakes, and the questions people may not want to ask out loud.

But increasingly, people are not using AI alone.

I have had my own small version of this feeling: realizing after the fact that I put more work context into an AI prompt than I meant to. It was not dramatic, and nothing bad happened. But it made the boundary feel suddenly visible. The tool felt like a private scratchpad until I remembered that accounts, workspaces, logs, and organizational policies sit around the conversation too.

Generative AI tools are being adopted inside shared organizational environments: team workspaces, enterprise accounts, admin dashboards, shared projects, collaborative chat histories, and internal AI platforms. In these settings, AI use may not be fully individual. Prompts, outputs, files, usage patterns, or metadata may be visible to someone else: a teammate, a manager, an administrator, or the organization itself.

AI disclosure is no longer only a personal decision. In workplace AI systems, disclosure can become infrastructure.

Once AI use happens inside workplace infrastructure, disclosure shifts from a moment of personal admission to a question of system design: what gets logged, who can access it, and how clearly any of that is explained to the people using the tool.

Most conversations about AI at work focus on whether people choose to disclose that they used AI. Did they tell their manager? Did they cite it? Did they admit that a draft, analysis, or line of code was AI-assisted?

But in shared AI workspaces, disclosure is not always a choice. Sometimes visibility is built into the system. Sometimes it is unclear. Sometimes people simply do not know who can see what.

Uncertainty itself can change how people use the tool.

What Might Be Visible?

A prompt can be more than an instruction. It can be a trace of how someone thinks. It can show what they are struggling with, what they are responsible for, what they do not know yet, what they are trying to automate, and what kind of help they need. And prompts are only part of it. Depending on how a workspace is configured, any of the following may be visible beyond the person typing:

  • prompts
  • outputs
  • uploaded files
  • chat histories
  • timestamps
  • token counts
  • tool choices
  • usage dashboards

Even usage metadata can tell a story. How often someone uses AI. When they use it. Which tools they use. Whether they generate long documents, summarize sensitive materials, or repeatedly ask for help with the same task. In a workplace, these traces can easily become signals of productivity, competence, dependence, creativity, or risk.
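
To make that concrete, here is a purely hypothetical sketch, in Python, of the kind of usage record a shared AI platform might retain. The field names are illustrative assumptions on my part, not any specific vendor's schema.

```python
# Hypothetical sketch only: illustrative field names, not a real vendor's schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageRecord:
    user_id: str          # who sent the prompt
    workspace_id: str     # which team or shared project it happened in
    timestamp: datetime   # when it happened
    tool: str             # which model or tool was used
    prompt_tokens: int    # rough size of what was asked
    output_tokens: int    # rough size of what came back
    files_attached: int   # how many documents were uploaded
```

Nothing in this record is the content of anyone's work, yet a few weeks of rows like this can be aggregated into exactly the kind of story described above: how often, how late, with which tools, and at what scale someone leans on AI.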

That does not mean organizations should never have oversight. Workplaces have legitimate reasons to care about data security, compliance, privacy, and responsible AI use. But there is a difference between governance that builds trust and monitoring that makes people feel evaluated every time they ask a question.

Visibility Changes the Tool

  1. Just me: The AI feels like a private scratchpad for messy thinking, drafts, and trial runs.
  2. My team: The same prompt starts to feel collaborative, but also more performative.
  3. My manager: Usage can become evidence of productivity, dependence, experimentation, or risk.
  4. My organization: AI use becomes part of infrastructure, policy, metrics, and workplace governance.
  5. IT or admins: Visibility may be framed as maintenance, security, or compliance, but it can still feel like surveillance.

This is where workplace AI gets complicated.

If employees believe their prompts might be visible to managers, they may avoid asking certain questions. If they think teammates can see their usage, they may perform competence instead of experimenting. If dashboards, token counts, leaderboards, or productivity metrics are introduced, people may start using AI strategically: not necessarily in the way that helps them think best, but in the way that looks best.

Visibility can shape behavior before anyone is formally punished or rewarded.

Managers Are Users Too

I also think the manager view matters here. What can managers actually see in these systems? What do they usually look at? Do they understand the trade-offs that come with making AI use visible? A dashboard that looks useful for adoption tracking may also make employees feel watched while they are still learning how to use the tool.

The same question applies to other roles, especially IT staff, administrators, team leads, and upper management. Visibility can travel sideways and upward in ways users may not expect. Managers may also have their own version of the concern: are they comfortable with their AI use being visible to other managers, senior leadership, or IT?

Small interface probes can help surface what people assume, what they notice, and what makes them uncomfortable.

People may censor their prompts. They may avoid sensitive but legitimate tasks. They may move work into personal accounts. They may stop using AI for early-stage thinking. Or they may overuse it because the organization seems to reward visible AI adoption.

I am also interested in boundary crossing: the messy moments when people accidentally or intentionally put personal information into workplace tools, or use a personal AI subscription for work because it feels faster, safer, more familiar, or less monitored. Those moments are not only rule violations or user mistakes. They reveal where people think the boundaries are, where the tools make those boundaries blurry, and what people are trying to protect.

The Research I Am Starting

I am doing research on this: how people understand visibility, monitoring, and surveillance in collaborative workplace AI systems. I am interested in what workers believe is shared when they use AI at work, how managers and administrators understand the visibility they are given, and how all of this changes behavior.

The question is not only: do people use AI?

It is also: what do people think AI use reveals about them? And who do they think is watching?

The questions I am starting with fall into five areas:

  • Awareness: What do users believe is visible or shared in joint AI workspaces?
  • Perception: How do users perceive risks related to monitoring, peer visibility, and usage metrics?
  • Behavior: How does perceived visibility affect prompt content, usage frequency, and avoidance?
  • Governance: How do policies, dashboards, metrics, and leaderboards shape trust and usage?
  • Oversight: What can managers, IT staff, and admins see, and how do they interpret the trade-offs?

For this project, I plan to conduct semi-structured interviews with roughly 25 to 35 people who use or oversee AI tools in workplace settings. I am especially interested in people with experience using shared or enterprise AI systems, but I also want to include people who are unsure whether their AI use is visible. That uncertainty is part of the problem.

The interviews will focus on how participants understand what is shared in these systems: prompts, outputs, files, chat histories, and usage metadata. I will ask what they know about organizational policies and monitoring practices, and how they feel about different forms of visibility, such as peer access, managerial oversight, administrator access, and organization-wide reporting.

I also plan to use simple scenarios and mock interfaces during interviews. For example: imagine an AI workspace where only you can see your prompts. Now imagine your team can see shared AI threads. Now imagine a manager receives weekly usage summaries. Now imagine a leaderboard shows who uses AI the most.

Each version changes the social meaning of the same tool.

A chatbot is not just a chatbot once it enters an organization. It becomes part of a workplace system: policies, norms, incentives, dashboards, power dynamics, and trust.

I want to understand how people reason about that system. What do they assume? What do they fear? What do they change? What do they wish had been explained more clearly?

I will analyze the interviews thematically, looking for recurring patterns in how people make sense of visibility and sharing. I am especially interested in mismatches between what people believe is visible and what systems actually make available, as well as the strategies people adopt in response to perceived monitoring.

AI governance cannot only be about telling people to disclose their use. It also has to ask when disclosure is automated, ambiguous, or quietly built into infrastructure. It has to ask whether workers understand the systems they are being asked to use. It has to ask how visibility affects experimentation, learning, privacy, and autonomy.

Because if people feel watched, they will adapt.

Those adaptations will shape what workplace AI becomes: not only which tools organizations buy or configure, but how safe people feel experimenting, learning, making mistakes, asking for help, and doing the quiet thinking that happens before work is ready to be seen.