Microsoft security tools questioned for treating employees as threats

79 points by Dotnaught 10 months ago | 29 comments
  • Nerada 10 months ago
    "Employee surveillance" sounds a lot more nefarious than the reality of these systems for most organizations.

    Your network admin has had access to the proxy, and by extension all your browsing history, since forever. Now your UEBA (user and entity behavior analytics) sees the same data, but it mainly just sits there and flags anomalies: a user who normally hits a single host suddenly hitting 300 hosts on the network, or a user whose average data upload is 500MB/week suddenly pushing 200GB in a single session.

    Very few people care if you're using the corporate network to listen to YouTube Music (or even looking for other jobs), most just want to be notified of data exfiltration, compromised accounts, or malicious network activity.

    • M95D 10 months ago
      It's not that simple. The more data points there are, the stricter the rules can get. A user with an average data upload of 500MB/week could be flagged at 50MB in 5 minutes. I expect the system self-adapts until it flags at least some users, and I expect those flagged are not the technologically challenged users who browse anywhere and click anything, but the most competent users who perform "weird" actions such as running ping or ftp, or opening the command prompt.
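
      To illustrate, here is a minimal sketch of the kind of per-user baseline check such a system might run (the function name and the 10-sigma threshold are made up; real UEBA models are far richer):

        import statistics

        def upload_looks_anomalous(history_mb, current_mb, k=10):
            # Flag when the current session is far above this user's own baseline.
            baseline = statistics.mean(history_mb)
            spread = statistics.pstdev(history_mb) or 1.0  # guard against zero spread
            return current_mb > baseline + k * spread

        # A user averaging ~500MB/week suddenly pushes 200GB in one session.
        print(upload_looks_anomalous([450, 520, 480, 510], 200_000))  # True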
        • nextaccountic 10 months ago
          > Very few people care if you're using the corporate network to listen to YouTube Music (or even looking for other jobs),

          In many jobs, you will be fired on the spot for looking for other jobs.

        • Animats 10 months ago
          This sort of thing makes me miss the classified world.

          Counterintelligence people definitely view employees as risks. But they're not your boss. They work for a different organization entirely. They're watching your boss, and your boss's boss, too. They only care about threats to national security. If they find other things, they log them, but don't tell your management. They have nothing to do with performance evaluation. The three-letter agencies worked out the rules on this stuff decades ago.

          • Jerrrrrrry 10 months ago
            I've seen punny/phishy names being used in Onboarding -> Contact List -> Phishing pivots.

            Hernan R. Resources, Sam S. Upport, etc. are the worst ones; the best of them I've pridefully jotted down.

            WFH magnifies this attack vector; the internet doesn't seem to have named it yet, but I'm sure "onboardishing" is gonna be in the annual security training in a few years.

            • justinclift 10 months ago
              It's not super clear what you mean.

              What's the thing that's happening that you're talking about?

              • Jerrrrrrry 10 months ago
                Rushed onboarding practices due to WFH policies and labor "shortages" have led to a new-ish attack vector: fake employees (often with adversarial-themed punny names) that basically exist only to farm internal emails and contact lists and to run an intranet-wide phishing campaign, often using the punny name as a double entendre to establish an authoritative reply address, all inside the company email/network.

                First initial, middle initial, last name -> Ian Thomas Support => itsupport@example.com
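
                A toy sketch of the pivot (the domain and the username convention are assumptions for illustration):

                  def corp_email(first, middle, last, domain="example.com"):
                      # Assumed convention: first initial + middle initial + last name.
                      return (first[0] + middle[0] + last).lower() + "@" + domain

                  print(corp_email("Ian", "Thomas", "Support"))   # itsupport@example.com
                  print(corp_email("Hernan", "R", "Resources"))   # hrresources@example.com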

          • dugite-code 10 months ago
            If you have paid any attention to cyber security (or, well, anything) in the last 5-10 years, this should be expected.

            "Insider threats" are typically the one group that any security firm can actually do anything about in an active manner. Every other threat group comes at you, not the other way around.

            • zdragnar 10 months ago
              Pretty much any security training will identify insiders, malicious or not, as a vector.

              That's the whole point behind phishing attacks against corporate employees. Back in the day of Windows auto-playing CDs and USB drives, dropping random official-looking drives around the parking lot was a thing: https://www.wired.com/2011/06/the-dropped-drive-hack/

              Then you get into data exfiltration by employees who were bribed, etc.

              • TeMPOraL 10 months ago
                In other words: security firms are like drunkards looking for their lost keys under the nearest street light instead of where they lost them, because it's easier to see under the street light.

                Having paid attention to cyber security over the past decade, this tracks.

                  • night862 10 months ago
                    No, it is due to the game-theoretic phenomenon called "The Attacker's Advantage", which is usually interpreted to simply mean "an attacker only has to succeed once; defenders must succeed every time," but in general refers to the strategic landscape of infosec.

                    Frankly, the thought of "Actively Coming At" infosec threats is personally appalling.

                    Feelings aside, you couldn't "actively" "come at" APT actors from the future with unknown techniques, and the way to "actively come at", so to speak, entire categories of "cyber threats" would be to fund detailed white box security testing on your IoT devices or VoIP handsets, for example.

                    Much cheaper to oppress your workforce. At least that will shorten the checklist. Worst case scenario, you can catch the next wave of offshoring if you push it too far and they unionize.

                  • hsdropout 10 months ago
                    Agreed. This is also a feature of Microsoft's Purview product:

                    https://learn.microsoft.com/en-us/purview/insider-risk-manag...

                  • crvdgc 10 months ago
                    > Both suggest targeting "disgruntled employees" and those with bad performance reviews as potential insider threats – Forcepoint even mentions "internal activists" and those who had a "huge fight with the boss" as risks.

                    > Forcepoint offers to assess whether employees are in financial distress, show "decreased productivity" or plan to leave the job, how they communicate with colleagues and whether they access "obscene" content or exhibit "negative sentiment" in their conversations.

                    This far surpasses the normal surveillance, which is more technical in nature. It's trying to combine mind reading and Minority Report to enforce a Stalinist level of thought control. How much can be delivered in reality remains to be seen, though.

                    • duxup 10 months ago
                      These also seem to have a weird stacking nature. One bad review, someone says they had a "fight" with their boss... now some system brands them as a threat, their options for advancement are limited, but maybe they don't even know why...

                      After a while it's a self-fulfilling prophecy. All because of a couple of perceived issues.

                      • hulitu 10 months ago
                        > This far surpasses the normal surveillance, which is more technical in nature.

                        Looks more like fascism.

                        • night862 10 months ago
                          If it were the government, it would be.
                      • SoftTalker 10 months ago
                        A good way to avoid malicious insiders is to pay well enough that employees won’t risk their jobs by violating the trust placed in them. That said, there’s a place for monitoring like this, to detect compromised accounts or malware activity.
                      • michaelmrose 10 months ago
                        Any test for something with a very small base rate risks an unreasonably high number of false positives when applied to a large population, even with a negligible false positive rate. This is especially bad with a squishy, non-scientific topic.

                        If you have 50,000 employees and are screening for a risk that is 1 in 1M with a 5% false positive rate, you are going to be very disappointed when, over the next decade, it identifies 25,000 would-be shooters while you have zero actual active shooters. Even better, you will probably learn to disregard such a test and miss it if it actually happens.
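
                        Spelling out that arithmetic (the 5% and 1-in-1M figures are from above; screening once per year is an assumption):

                          employees = 50_000
                          base_rate = 1 / 1_000_000   # actual threats per employee
                          fpr = 0.05                  # false positive rate
                          years = 10

                          false_alarms = employees * fpr * years         # 25,000.0
                          expected_real = employees * base_rate * years  # 0.5
                          precision = expected_real / (false_alarms + expected_real)
                          print(false_alarms, expected_real, f"{precision:.4%}")  # ~0.002% of flags are real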

                        As awesome as that is, the fact that Skynet is always watching will probably cause people to manage their workplace personas to a psychotic degree, which will surely ratchet workplace stress up to new highs. Deprived of actual data on what triggers the Eye of Sauron, 100 wrong theories about how to avoid it will proliferate, and your studied population will both diverge from the norm the system was designed to operate on and become progressively worse.

                        A few years later a study will prove that the AI inadvertently learned to discriminate against minorities, women, or people in other time zones through things the training population did without thinking, and the people pushing it will look like bigots. Instead of ejecting, we will try to fix it. Either this doesn't work, or if it does, people will accuse Skynet of being woke.

                        • hex4def6 10 months ago
                          I question your "1 in 1M". I think it's probably multiple orders of magnitude more common than that. There is a continuum of how damaging the data leakage is, and/or who the data is flowing to.

                          Just through personal anecdote, I can think of maybe 5 examples of people I directly know 'retaining' stuff like Outlook email PSTs, training documents, etc. And I'm sure there are many more people in my immediate circle who have done similar stuff.

                          The espionage situation is perhaps rarer, but still prevalent. It's common for people to hang around outside the Foxconn campus soliciting data on the latest iPhone / what-have-you. There's lots of money to be made in having a 2-3 month jumpstart on your competitors when you design a third-party case, for example.

                          • michaelmrose 10 months ago
                            The more subtle the problem, the more ridiculously poor the AI would be at catching it.

                            I question the chances it will be any better at catching the fellow who comes back in with the AR than reasonable management would be.

                            Insofar as catching people who are simply disloyal goes, I would count it worthless.

                            At best it might make a bad guess at which employees are presently upset or dissatisfied, until it becomes obvious how to fake it; by which time it will be easy to guess who hates you, because living under the Eye of Sauron, it will be 90% of your staff!

                        • chris_wot 10 months ago
                          How do they know an employee is in financial distress? Because the company pays them peanuts?
                          • hulitu 10 months ago
                            Telemetry? They know everything you do at work and at home.