By Allison Grande

Law360 (July 28, 2023, 10:06 PM EDT) — California and Colorado regulators’ approaches to enforcing the state privacy laws that took effect earlier this year will attract much attention in the coming months, as will the guardrails being placed around the growing use of artificial intelligence and the continued scrutiny of how companies deploy online tracking technologies.

Here, privacy and cybersecurity attorneys weigh in on some of the top issues to pay attention to in the closing months of 2023 and beyond.

State Privacy Law Enforcement to Heat Up

During the first half of 2023, major consumer data privacy laws took effect in California, Virginia, Colorado and Connecticut, while Texas, Oregon and several other states put their own laws on the books. The rest of the year will be spent getting up to speed with these newly enacted statutes and monitoring how regulators wield their new powers to police companies’ privacy practices, attorneys say.

“Companies have been worried about getting ready for these enforcement deadlines and making sure everything is up-to-date and buttoned up behind the scenes, which includes taking steps such as updating their privacy notices and vendor agreements,” said Aaron A. Ogunro, an attorney in the technology transactions and data privacy practice group at Polsinelli PC.

How the California Privacy Rights Act and Colorado Privacy Act are enforced will be of particular interest, given that they are the only two of the dozen active state privacy laws that give regulators rulemaking authority to help companies comply with their new statutory obligations.

Colorado Attorney General Phil Weiser completed this task ahead of the July 1 implementation deadline, releasing final regulations in March that offered detailed guidance for the first time on critical topics such as data protection assessments, the use of personal information for profiling, and consumers’ ability to universally opt out of the sale or sharing of their personal information. The regulator is expected to vigorously enforce these rules.

The newly created California Privacy Protection Agency, the first authority in the United States solely focused on data privacy, fell short of establishing rules for the 15 areas covered by the California Privacy Rights Act before the law’s Jan. 1, 2023, implementation date. The agency in March finalized regulations for a dozen of these topics, including privacy notice requirements and how to respond to browser signals that communicate consumers’ choice to opt out of the sharing of their personal data. It is still crafting rules for the three remaining topics: cybersecurity audits, risk assessments and automated decision-making.

While the CPPA was expected to come out of the gate strong when it was allowed to start enforcing the law July 1, a California state judge threw a wrench in those plans on the eve of enforcement. He ruled that the agency, which the law had directed to adopt final regulations by July 1, 2022, couldn’t begin enforcing its new regulations until a year after they were finalized, which for the latest batch of completed rules means March 29, 2024.

However, the ruling hasn’t foreclosed the possibility of enforcement this year. In response to the decision, the CPPA reiterated that it “remains committed to advancing the privacy rights of Californians” and vowed to “take the appropriate next steps to safeguard the protections Californians overwhelmingly supported at the ballot box.”

The agency also stressed that “significant portions” of the law’s protections “remain enforceable” starting July 1, given the judge’s holding that his decision applies only to the regulations put in place by the new privacy agency, not to the rules established by the California attorney general under the state’s inaugural privacy law, the California Consumer Privacy Act. The Legislature enacted that law in 2018, and it was strengthened by the California Privacy Rights Act, a ballot initiative that state voters approved in 2020.

California Attorney General Rob Bonta backed up this assertion earlier this month when he announced that his office is undertaking an “investigative sweep” of the state’s large employers over their compliance with new obligations that took effect in January to safeguard employee and applicant data. This data had largely fallen outside the reach of the state’s first privacy law, but the California Privacy Rights Act scrapped that exemption, leaving the area ripe for regulatory enforcement.

When the enforcement actions start flowing in, the approach that Bonta and the CPPA take will be telling given that the regulations leave “a lot unsaid” and regulatory actions provide a chance for additional guidance and clarity, noted Michael Bahar, a partner and co-lead of the global cybersecurity and data privacy practice at Eversheds Sutherland.

“Enforcement activity will likely allow companies to learn some more about how these regulations are going to be interpreted,” Bahar said.

In addition to gearing up for enforcement, companies will need to reevaluate and update their compliance posture to take into account the several new privacy laws that were put in place this year, many of which take effect at some point in 2024, attorneys noted.

“There have been so many developments in such a short time, with each providing nuanced and detailed requirements that are somewhat dizzying to try to absorb in real time,” said Scott Loughlin, who co-leads the global privacy and cybersecurity practice at Hogan Lovells. “So there will be a reckoning during the second half of 2023, as companies develop reconciliation strategies and undertake what’s likely to be years of work ahead to understand how and when these laws apply and how to comply with them.”

Both businesses and privacy advocates will also continue to keep a close eye on whether federal lawmakers will finally step in to set a uniform national standard for the collection, use and disclosure of personal information.

Congress came closer than ever to enacting federal privacy legislation last year, when the House Energy and Commerce Committee easily advanced the American Data Privacy and Protection Act in July 2022.

While key House leaders have vowed to get federal privacy legislation across the finish line, lawmakers have yet to reintroduce the ADPPA or a similar comprehensive privacy proposal this year, due in large part to the long-running debate over whether a national framework should preempt more stringent state protections. As it currently stands, the prospects for enactment of such a proposal before the end of the year appear dim, although protections for a narrower subset of consumer data, such as information related to health conditions or children, could squeak through in the coming months.

“Given the movement of the ADPPA in 2022 and the growth in state laws, this failure to act at a federal level is even more striking,” noted Kirk Nahra, co-chair of the cybersecurity and privacy practice at WilmerHale.

Charting a Path Forward for AI Regulation

Since OpenAI released its popular text generator ChatGPT at the end of November 2022, the bandwidth that policymakers, regulators and others are dedicating to the promising potential and possible risks of generative AI, which can produce clear and human-sounding text about virtually any topic, has exploded.

“The commercial launch of artificial intelligence is far and away the top cyber/privacy development of the year so far,” said David A. Straite, a partner at plaintiffs firm DiCello Levitt LLC. “We should expect continuing discussions — and surprises — related to the privacy aspects of the inputs — what information is gathered and fed into the AI — and the outputs, [meaning] how is AI used.”

The Biden administration has shown a particularly keen interest in the topic, releasing a Blueprint for an AI Bill of Rights, which lays out a voluntary roadmap for the responsible use of AI, and meeting with various stakeholders, including consumer protection, labor and civil rights leaders and the CEOs of major tech companies, to discuss topics such as how to protect the public from harm and discrimination and best practices for managing cybersecurity threats.

Earlier this month, several of these leaders convened at the White House to announce that seven leading AI companies — Amazon, Anthropic, Google, Inflection, Meta Platforms, Microsoft and OpenAI — had voluntarily agreed to several commitments to support the safe and responsible deployment of AI, including boosting their investment in cybersecurity and being more transparent about how this technology is being used.

The European Union has also made significant strides in this area. The bloc took the lead in establishing a groundbreaking consumer data protection framework in 2016 and is again blazing the trail: it is on the brink of enacting the world’s first comprehensive law governing AI systems, which would take a risk-based, tiered approach to regulation that includes outright bans on applications deemed to pose unacceptable risks.

“The EU’s AI Act is going to bring a lot more attention and focus to how companies are using AI today and what their plans are for the future,” said Hogan Lovells’ Loughlin.

With Congress unlikely to roll out significant AI regulations in the near future, U.S. states are likely to fill in the gaps, and regulators are expected to expand existing laws on topics such as data privacy and discrimination to cover the AI environment, noted David Kessler, head of the U.S. privacy practice at Norton Rose Fulbright.

“There are conflicting pressures with these big analytics models wanting more and more data to make them more accurate and effective, which runs counter to the push to make sure companies aren’t holding onto personal data for longer than is necessary,” Kessler said.

This point has been driven home by regulators such as Federal Trade Commission member Alvaro Bedoya, who in March highlighted the parallels between policing AI and data privacy and warned that the “unpredictability” of this emerging technology was no excuse for companies to ignore their long-standing obligations to protect consumers under a range of existing laws.

Since then, the FTC has continued to ramp up its scrutiny of AI, revealing earlier this month that it had opened an investigation into OpenAI to determine whether the company has mishandled personal data or otherwise engaged in unfair or deceptive practices.

The use of data to fuel AI technologies is also increasingly being addressed in the growing number of state privacy laws, many of which contain restrictions on automated decision-making and discriminatory uses of data, attorneys noted.

“We’re still largely in the era of self-governance in the AI space, but at a bare minimum, regulators are likely going to want to see some kind of corporate governance document like an AI charter that lays out the guardrails and principles that govern the use of AI and protect against unfair treatment or disparate impacts,” said Bahar, the Eversheds Sutherland partner.

Law firms are also paying more attention to AI issues and are coming up with innovative ways to advise their clients on this emerging area of the law. As one example, firms including Womble Bond Dickinson, Baker Donelson and Dykema Gossett PLLC have recently launched multidisciplinary artificial intelligence teams to help steer companies through the burgeoning opportunities and legal risks of the technology.

“Whether your clients are thinking about using AI or are already in the business, or are developing products for other companies to enhance their services, that’s all going to require legal advice for compliance with multiple federal and state laws across a range of different practices, from privacy to employment to litigation,” said Cinthia Motley, director of Dykema’s global data privacy and information security practice group and member of the firm’s cross-disciplinary AI and innovation team.

Litigants, Regulators to Dial Up Online Tracking Scrutiny

Litigation accusing retailers, media providers, tax preparers and hospitals of violating federal law through the third-party tracking and monitoring technologies they deploy on their websites has continued to proliferate, and the progression of these proposed class actions will be important to watch in the second half of 2023, attorneys say.

“Online tracking technology has triggered a lot of creative litigation that’s created a great deal of risk and ambiguity,” said Nancy Perkins, counsel at Arnold & Porter.

These disputes include ones that accuse website operators of violating the Video Privacy Protection Act by disclosing private information related to visitors’ video-viewing habits to Meta through the social media giant’s Pixel analytics tool, as well as those that allege major brands are surreptitiously eavesdropping on users through “session replay” software embedded on their websites, ostensibly to help them better understand their visitors’ online experience.

A “particularly interesting” development in this area has been the prevalence of litigation against hospital systems, telehealth companies and other health care providers over their use of the Meta Pixel tool, noted David Almeida, the founder and managing partner of plaintiffs firm Almeida Law Group LLC.

Given that there’s no private right of action under the Health Insurance Portability and Accountability Act, which prohibits the disclosure of protected health information without patients’ consent, plaintiffs have had to find other ways “to attempt to vindicate users’ privacy rights,” including by bringing claims under federal and state wiretapping statutes and asserting various common law privacy claims, Almeida said.

While the case law is still developing, some trends are starting to emerge. One is a recent flurry of decisions dismissing putative class actions against entertainment providers such as Paramount Global and MeTV, holding that merely providing registration information to a website that happens to offer video content isn’t enough to qualify a visitor as a “subscriber” covered by the Video Privacy Protection Act. Attorneys say they’ll be watching to see whether this pattern holds and what other trends emerge.

Federal regulators have shown in recent months that they’re paying close attention to this area as well.

The FTC has announced several major health privacy actions so far this year that have come down on companies for their allegedly unauthorized sharing of personal health information with Facebook, Google and other advertisers.

The commission also took the significant step of teaming up with the U.S. Department of Health and Human Services’ Office for Civil Rights earlier this month to warn dozens of companies about their responsibility to guard against unauthorized data disclosures.

In their joint letter, sent to approximately 130 hospital systems and telehealth providers, the agencies flagged the “risks and concerns” of integrating online tracking technologies such as the Facebook Pixel and Google Analytics into websites and mobile apps, which may lead to the impermissible disclosure of consumers’ personal health data to third parties.

The regulators warned that they’re “closely watching” developments in this area, and they “strongly encouraged” providers to review laws that could apply if such unauthorized disclosures are made — including HIPAA, Section 5 of the FTC Act and the FTC’s Health Breach Notification Rule — and to “take actions to protect the privacy and security of individuals’ health information.”

“It will be interesting to see if companies cease these online tracking practices,” Perkins said. “While it’s such a valuable way of marketing, it’s also clear they could run into trouble here.”

–Editing by Jay Jackson Jr. and Kelly Duncan.

Read more at: https://www.law360.com/classaction/articles/1704424
