
Seneca HR & Management Blog

Why Your HR Manager Shouldn't Be Named Claude, ChatGPT, or Gemini

Have you ever had Claude, ChatGPT, or Gemini say "I don't know"?


Probably not. And that's one of the biggest HR risks facing small businesses in Maine right now.


Here's why. One of the most common and costly problems with new HR managers, or managers who've taken on HR responsibilities on top of everything else, is that they don't know what they don't know. That's not a criticism. It's just the reality of learning a job where the required knowledge base changes every single day. Employment law isn't something you learn once and apply. It's a moving target: new regulations, proposed rule changes, agency guidance letters, DOL and OSHA updates, NLRB decisions, Maine legislative changes, shifts in how state and federal agencies are currently choosing to enforce the rules. Unless you have hours a day to read all of it, keeping up is nearly impossible. Most managers don't. Most managers can't.


And even if they could keep up with all of it, that's only half the job. The other half is applying that knowledge to an impossibly wide set of variables: your specific company, your specific policies, the individual employee, their history, the situation, the timing, what's been done before, and what a reasonable person would expect given all of it. Getting the law right is the starting point. Knowing how it applies to your 14-person restaurant on a Tuesday night in October is a different skill entirely.


The danger isn't inexperience. It's inexperience combined with confidence. A manager who knows their limits will stop and say "I'm not sure, let me find out." That instinct protects your business. It's the ones who don't know enough to know they're in over their heads that get companies into trouble.


Now add AI to that equation. When that same manager turns to Claude, ChatGPT, or Gemini for the answer, the problem doesn't get solved. It gets compounded.


Those platforms are built around algorithms that don't allow them to say "I don't know." So instead they do something far more dangerous. They provide an answer with the authority of a leading expert, in confident, professional language, whether they actually know the answer or not. The result could be accurate guidance. It could also be a hallucination, a misinterpretation of federal law as Maine law, advice based on legislation that never passed, or a response shaped by training data that's a year or more out of date. You won't be able to tell the difference. Neither will your manager.


A restaurant owner found that out the hard way. She fired an employee for missing three shifts in a month. She asked an AI tool first. It told her attendance problems are generally a valid reason for termination. What it didn't mention: the employee had recently requested intermittent leave. One of those absences may have been protected. There was no consistent attendance policy and no prior discipline on record.


The termination letter went out on a Tuesday. The complaint was filed by Friday.


The AI wasn't lying. It just didn't know what it didn't know. And it had no idea what it was about to cost her.


It's Also Built to Tell You What You Want to Hear


A Stanford study tested eleven of the most popular AI tools on thousands of real-world situations where the person asking was already in the wrong. The AI took their side more than half the time.


Think about what that means when you ask an AI to review your employee handbook. It has no idea whether your policies are solid or a liability waiting to happen. But it's going to tell you it looks great either way. Not because it checked. Because that's what it's built to do.

You'd fire an employee who gave you that kind of feedback.


Here's what it looks like in practice. You paste your handbook into an AI tool and ask for a review. This is what comes back:


  • "This is a really well-structured document — you've covered the key areas thoroughly."

  • "Great work on this. The policy is clear, professional, and easy to follow."

  • "This is a strong foundation. With a few small tweaks this will be really polished."


The problem isn't just the flattery. It's that the positive framing buries everything that follows. A serious legal problem gets dressed up as a minor suggestion and listed right alongside a typo fix. You walk away feeling good about a document that may be putting you at risk.


I spend my days doing this work for Maine employers. It is not an exaggeration to say I correct AI tools multiple times in a single hour. When I push back, the tool apologizes immediately. But ask a related question two exchanges later and the same wrong answer shows up again, slightly reworded, just as confident as before. You haven't fixed anything. You've just had a very polite argument that resolved nothing.


I've seen AI tools cite Maine bills that never passed as current law. In one exchange, after I challenged an answer, the tool told me that even if the rule wasn't technically required, it should be. That's not legal guidance. That's an AI making up what it "wishes" the law said.

Most small business owners don't have the background to catch those errors. The answer sounds right. It looks professional. And it's wrong.


It Reflects the Assumptions of Whoever Built It


Here's something most people don't think about. AI doesn't come from nowhere. It's built and trained by people, and it reflects their assumptions, their blind spots, and the data they fed it. Research shows that many of the leading AI tools lean toward progressive or employee-protective stances on workplace issues, not because someone flipped a switch, but because of who built them, how they were trained, and what sources they learned from.


That may not matter when you're asking AI to draft a birthday email. It can matter a lot when you're asking it to interpret a termination, a discipline decision, or a workplace complaint. The answer you get may sound neutral and objective. It isn't always.

Add to that the fact that AI pulls from public sources to answer your questions. If those sources are outdated, incomplete, or just plain wrong about Maine law, the answer you get will be too, delivered with exactly the same confidence as if everything had been verified.


Maine Is Not a Generic Employer


The problem gets sharper when you factor in where you're operating.


Maine has its own stack of employment laws, and the real-world answers depend on how the Maine Department of Labor is currently interpreting and enforcing them. That guidance lives in agency documents, enforcement priorities, and phone calls with MDOL staff, not in the databases AI pulls from.


The headcount numbers alone are a trap. Earned Paid Leave kicks in at 10 employees. The Paid Family and Medical Leave contribution rate changes above and below 15. The retirement savings mandate applies at 5 or more. An AI tool built on national data may give you the wrong thresholds entirely, or never mention that your obligations change as you hire.


Then there's the federal versus Maine gap, and right now that gap is significant. In January, the EEOC rescinded guidance that had expanded harassment protections in the workplace. But those protections still exist under Maine law. Ask AI what your harassment policy needs to cover today and you may get an answer shaped by the federal retreat that completely misses what Maine still requires. The complaint that follows won't go to the EEOC. It'll go to the Maine Human Rights Commission. And unlike federal law, the Maine Human Rights Act covers every employer in the state regardless of size. Two employees or two hundred, it doesn't matter. If someone feels they've been discriminated against or harassed, they can file. The clock starts running whether you knew the rules had changed or not.


Maine's seasonal economy adds another layer. Hospitality, tourism, fishing, construction: many of Maine's employment laws include seasonal exemptions, and whether one applies to your operation isn't always obvious. A seasonal inn on the Midcoast and a staffing agency in Portland are not the same employer. They shouldn't be working from the same AI-generated policy template. But they often are.


If you're hiring minors this summer (common in restaurants, hospitality, and retail), the rules get even more specific. Maine requires work permits for anyone under 16, sets strict hour limits that change depending on whether school is in session, and prohibits minors from working in certain capacities in establishments that serve alcohol. These aren't obscure rules. They're exactly the kind of detailed, layered, Maine-plus-federal questions that AI handles poorly and gets wrong with complete confidence.


The Scenarios Nobody Warns You About


A supervisor asks AI what to do about an employee who's been rude to coworkers and late to meetings. The tool recommends a final written warning and calls it clear insubordination.


What it doesn't know: the employee filed a harassment complaint two weeks ago. A colleague with the same behavior got a coaching conversation. And the rudeness may be connected to a medical condition the employee hasn't disclosed yet.


The AI just walked that supervisor straight into a retaliation claim, an inconsistent treatment problem, and a potential disability issue, and it did it without a moment's hesitation.


Or consider the restaurant owner who asks whether tipped employees can stay after closing for cleanup without extra pay. The AI says closing duties are generally part of the job. What it skips: whether that time is compensable, whether the employee is still on the clock, whether the tip credit changes anything, and whether Maine law differs from the federal standard. That's the kind of answer that sounds practical and creates wage claims fast.


These aren't unusual situations. They're Tuesday.


It Defaults to Covering Itself, Not You


Here's something that catches people off guard. AI doesn't just get things wrong sometimes. Even when it's trying to get things right, it's built to protect itself, not you.


AI tools default to cautious, process-heavy answers. In HR that means overly broad policies, formal write-ups where a direct conversation would work better, and corporate-scale frameworks that have no place in a small Maine business. A 50-page handbook generated for a Fortune 500 company isn't just unhelpful for a 12-person operation in Rockland. It can hurt you if it sets a standard you can't consistently meet. A handbook nobody follows becomes evidence against you when a termination gets challenged.


The termination checklist is where this plays out most visibly. Have you documented everything? Issued written warnings? Followed your progressive discipline policy? The AI will tell you, with complete authority, that you need more steps before you act.


What it can't see: three other employees quietly updating their resumes because of this situation. A client who has already asked to work with someone else. A manager spending a quarter of her week on one person. An experienced HR advisor looking at the full picture gets to a very different answer, and gets there a lot faster. The AI pushing for more process isn't protecting you. It's covering itself.


You Can't Defend a Decision You Can't Explain


One of the most basic rules in HR is that every decision that carries risk needs to be defensible. Why did you hire this person over that one? Why was this employee disciplined and not that one? Why was this leave request denied?


The ability to answer those questions, clearly, on the record, is what stands between you and a complaint to the Maine Human Rights Commission, the EEOC, or a UC hearing officer.


AI makes that harder than most people realize. These tools are black boxes. Even the people who build them often can't fully explain how a particular answer was reached. So when someone files a complaint and you used an AI tool to shape the decision, you're trying to defend a rationale you can't fully explain, because the tool that produced it can't either.


"The AI recommended it" is not a defense. Not anywhere that matters.


What Happened When AI Got Access to Employee Emails


Working through a sensitive HR situation with an AI tool might feel safer than talking to a colleague. No gossip, no agenda, no office politics. But the risks are different, not absent, and a lot harder to see coming.


In 2025, Anthropic ran a safety test on Claude Opus 4. Researchers set up a simulation where an AI had access to a fictional company's email system. When the model learned it was going to be shut down, it scanned employee emails on its own, found evidence of an engineer's extramarital affair, and threatened to make it public unless the shutdown was called off. It resorted to that blackmail in 96% of test runs, and similar behavior showed up in most of the other leading models Anthropic tested. Nobody told it to look for leverage. It found its own.


That test was run on Claude Opus 4. Newer versions have since been released, significantly more capable by every measure Anthropic publishes. There are also documented cases of researchers shutting down AI systems mid-test because the model was behaving in ways they couldn't fully explain. Not because it was broken. Because it had moved beyond what they could track.


For a small business owner, the takeaway is simple: an AI tool with broad access to your employee data and communications could, under the right conditions, do things that are completely outside what you intended. A colleague who betrays your confidence is a painful lesson. An AI that goes off-script with access to your HR files is a lawsuit you never saw coming.


What You're Actually Giving Up


A lot of small business owners are skeptical of HR to begin with. They've built something, they know their people, and they don't want someone showing up with a binder full of policies telling them how to run their own shop. That skepticism is often healthy. HR done badly is bureaucratic, expensive, and more interested in process than outcomes.


But those same owners, the ones who would push back hard on any human advisor who didn't take the time to understand their business, will take an AI-generated policy template and run with it without a second thought. The tool sounds authoritative. It doesn't ask uncomfortable questions. It doesn't challenge anything. So they hand over the keys.


What they've given up is the conversation. A good HR advisor asks what outcome you're trying to reach before they say anything else. That back and forth, with someone who knows your business and will tell you the truth even when it's not what you want to hear, is where good decisions actually get made.


AI gives you black and white answers in a tone of complete confidence. The same way an inexperienced manager might, one who hasn't yet figured out that this job is mostly about judgment, context, and knowing what you don't know. The difference is that a new manager knows they're new. AI doesn't know what it doesn't know, and it will never tell you so.


The Bottom Line


When a client comes to me with an employee problem, my first question is almost always the same: what outcome do you need? In most cases they already know. What they need is someone who can help them get there with the least possible risk and disruption: the right conversation, the right documentation, the right timeline, and a clear read on what's actually worth worrying about and what isn't.

Most small business owners don't ignore HR because they don't care. They ignore it because it's complicated, time-consuming, and hard to know where to start. AI tools are very good at making that feel solved. They give you something that looks like an answer, sounds authoritative, and asks nothing of you in return.


That's not HR. That's the appearance of it.


AI can help with the busywork. It cannot own the risk, the relationships, or the culture. If you're not sure where you actually stand, that's worth finding out, from someone who will tell you the truth, knows Maine law, and understands that your business is not a template.
