Is privacy being traded away in the name of innovation and security?

Just a couple of weeks ago, International Privacy Day passed with the usual fanfare as companies, organizations, and governments seized the opportunity to push their sound bites highlighting the importance of making privacy paramount. But I see an irony in all the noise — much of it lip service to the need for greater vigilance in the face of the artificial intelligence boom — as many of these same entities are also asking people to trade privacy, and sometimes security, for convenience.

We've flogged to death the idea that generative AI is a game-changer (which it absolutely is), and we've discussed the need to harness its implementation within the rubric of protecting intellectual property. What we talk about far less is how to protect individual users who may not realize that every question asked, every scenario posited to the large language models (LLMs) that drive generative AI, feeds the beast with that information. And will the new OpenAI GPT Store, which allows users to create their own iterations of ChatGPT, be exciting and dangerous at the same time?

"As exciting as this is for developers and the general public, this introduces new third-party risk since these distinct 'GPTs' don't have the same levels of security and data privacy that ChatGPT does," Nick Edwards, vice president of product management at Menlo Security, tells CSO. "As generative AI capabilities expand into third-party territory, users are facing muddy waters on where their data is going."

Shadow AI — generative AI being used by members of an organization without the knowledge of IT professionals — is already a reality. Are CISOs prepared for their users to jump on this bandwagon in increasing numbers?

Employee monitoring and surveillance is a potential privacy violation

My favorite topic (probably because I've been engaged in counterintelligence and counterespionage for more than 30 years) is insider risk management (IRM), which is a continuation of the spycraft skillset. Recent developments find me going back to the core principles of that training.

Fifteen to 20 years ago, working from home was a rare novelty; in 2024 it is perfectly commonplace. Now, when putting together a "monitoring package" to protect an entity's assets, does a CISO consult with the legal team? Is there a meeting of the minds and a clear delineation of where monitoring becomes surveillance? Are the individual's privacy rights being trumped by the need to see what the employee is doing in the name of security?

Michael Brown, vice president of technology at Auvik, has it right in my opinion: "On one end of the spectrum, monitoring an employee's every action provides deep visibility and potentially useful insights, but may violate an employee's privacy. On the other hand, while a lack of monitoring protects the privacy of employee data, this choice could pose significant security and productivity risks for an organization. In most cases, neither extreme is the appropriate solution, and companies must identify an effective compromise that takes both visibility and privacy into account, allowing organizations to monitor their environments while ensuring that the privacy of certain personal employee data is respected."

The key word in Brown's observation is "compromise," and I am going to add "transparency." Employees who understand why and how their engagement is being monitored, and how that monitoring may indeed turn into surveillance when probable cause exists, will better understand the need to protect the entity as a whole by monitoring all who engage.

Collecting data comes with an obligation to protect data

The adage is that if you collect it, you must protect it. Every CISO knows this, and every instance where information is collected should have in place a means to protect that information. With this in mind, John A. Smith, founder and CSO of Conversant, proffered some thoughts that are easy to embrace.

Smith's comment about changing paradigms piqued my interest, and his expansion is worth taking on board as a different way of thinking. "Systems are generally open by default and closed by exception," he tells CSO. "You should consider hardening systems by default and only opening access by exception. This paradigm change is particularly true in the context of data stores, such as practice management, electronic medical records, e-discovery, HRMS, and document management systems."

"How data is protected, access controls are managed, and identity is orchestrated are critically important to the security of these systems. Cloud and SaaS are not inherently safe, because these systems are largely, by default, exposed to the public internet, and these applications are commonly not vetted with stringent security rigor."

Limiting access to information can also feed security issues

Perhaps I am an anomaly, but when I go to a website wanting to read an organization's whitepapers or research and am asked to provide identifying information to do so, I tend to close the browser and move along. If I really am interested, and there is no other way to obtain it, I will begrudgingly fill out the form to get the download. If I use a generic web-based email account, I am often rejected with an admonishment that the information is only for those with proper "business" accounts. Marketing seems to stand between spreading knowledge and feeding a sales funnel.

Researchers and vendors who put research behind a registration wall limit the spread of that information to those who might be able to put it to best use. I might be a freelance writer who isn't going to buy your product, but I might write about the research if I could read it without compromising my privacy.

If I have one overarching message here, it's this: think of privacy beyond the individual user, and consider it within the context of your ecosystem and total population, from users to customers to partners and beyond. Compliance is important, yet as we know within the security world, being compliant does not equate to being secure. By the same token, being compliant in the privacy world doesn't always ensure privacy.