
3 things companies need to know when New York City begins implementing an AI hiring law




In July, New York City officially began cracking down on companies that violate its first-in-the-nation law (New York City Local Law 144) regulating the use of artificial intelligence in employment decisions.

Even companies that are not headquartered in New York City but have operations and employees there — especially international companies — must comply with this new regulation. The law does not explicitly prohibit AI, but provides guidance on how the technology should be used when making employment decisions.

This is an important distinction. Organizations across industries (healthcare, manufacturing, retail, and many more) are already using smart technology in a number of ways. Examples include oncologists using artificial intelligence to help diagnose cancer with a high degree of accuracy, predicting purchasing patterns in manufacturing and retail to improve logistics and the consumer experience, and nearly all recorded music today using automatic tuning to correct or enhance the singer’s pitch.

When it comes to personnel, companies are currently using artificial intelligence to match relevant candidates to suitable jobs, and that is the focus of NYC 144. After several delays, the new law has made many companies a bit nervous at a time when job openings remain elevated and unemployment sits close to historical lows.


Regulation, yes

Boldface tech names like Microsoft president Brad Smith and Google CEO Sundar Pichai have endorsed a regulatory framework. Transparency is always a good thing. “I still believe AI is too important not to regulate, and too important not to regulate well,” Pichai wrote in the Financial Times.

Conversely, if not implemented well, regulations can negatively impact job seekers and hiring managers by restricting the insightful information and customized expertise that are the core of a positive hiring process.

Thirty years ago, recruiters sifted through piles of resumes on their desks. Candidates were often selected based on inconsistent criteria, including an Ivy League education, position within the pile and a bit of luck based on how high their resume sat in the stack, something over which they had no control. Humans’ unconscious biases add another untraceable filter when technology is not involved.

AI has provided scalability and accuracy to help level the playing field by matching individuals with the required skills and experience to the right roles, regardless of their place in the proverbial resume pile. AI also helps recruiters see the whole person, including skills an individual may not have thought to highlight on their resume. AI cannot stop a recruiter or hiring manager from taking shortcuts, but it can make those shortcuts less relevant by surfacing qualified resumes that might otherwise be missed in the pile.

Combining human control with AI support is a good way to combat bias in two ways. First, one reason for bias in human decision-making is that people often look for shortcuts to solve problems, such as focusing only on candidates from Ivy League universities rather than investing the time and effort to find and evaluate candidates from non-traditional backgrounds.

Second, reporting on adverse impact can reveal bias in real time, allowing the organization to take action and stop biased decisions.
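The kind of bias reporting described above is typically quantified with selection-rate impact ratios, the metric at the center of bias audits under the NYC law. Here is a minimal, hypothetical sketch in Python; the group names and counts are made up, and a real audit would be performed by an independent auditor on historical data:

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total if total else 0.0

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
    top = max(rates.values(), default=0.0)
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical audit data: {group: (selected, total applicants)}
groups = {"group_a": (40, 100), "group_b": (24, 100)}
ratios = impact_ratios(groups)

# A common rule of thumb (the EEOC "four-fifths" rule) flags ratios below 0.8.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)
```

Running a check like this on every hiring cycle, rather than once a year, is what makes the real-time intervention described above possible.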

There are potential laws being discussed in Europe that may restrict the use of any personalization in the talent acquisition lifecycle. This could hinder employment opportunities not only for external candidates, but also for employees already in the company looking to move into a new role.

Pulling back hard on the reins of these technologies can actually invite more bias, because imperfect humans will then be solely responsible for the decision-making process. That could result in a fine under the New York law and additional federal penalties, since the EEOC has warned companies that they are on the hook for any discrimination in hiring, firing or promotions, even if it is unintentional and regardless of whether it is AI-assisted.

Looking beyond fear

No law is perfect, and New York City’s new legislation is no different. One requirement is to notify candidates when artificial intelligence is used, much like the cookie notices on websites or the end-user license agreements (EULAs) that most people click through without really reading or understanding.

Words matter. When reading notices about the use of artificial intelligence, people can easily conjure the doomsday images depicted in films about technology outpacing humanity. There are countless examples of new technology inducing fear: electricity was thought unsafe in the nineteenth century, and when bicycles were first introduced, they were seen as reckless, ugly and unsafe.

Explainability is a basic requirement of this regulation, as well as simply good practice. There are ways to reduce fear and improve notices: keep them clear and concise, and keep legal jargon to a minimum so that the target audience can internalize and understand the AI being used.

Get compliant now with AI regulation

Nobody wants to run afoul of the New York law. So here are three recommendations for business leaders to pursue alongside legal counsel:

  1. Examine the content of notifications and the user experience. How well have you explained, in plain English, your use of these technologies to job seekers? Einstein said, “If you can’t explain it simply, you don’t understand it well enough.” Let people know you’re using an algorithm on a job site. For example: “Here’s what we collect, here’s how we’ll use it (and how we won’t), and here’s how you can control its use.”
  2. Engage in the regulatory process, and engage right away. The only way to stay ahead of regulation and ensure compliance is to know what’s coming. This was a challenge with the General Data Protection Regulation (GDPR) in Europe: when the compliance period began in May 2018, most companies were not ready, and the penalties were pretty steep. Apply those lessons learned to the New York law by engaging with like-minded organizations and government agencies at the leadership and executive levels. This not only opens your organization up to the conversation, but also allows for the introduction and harmonization of policies, procedures and practices.
  3. Be ready to audit. Look at the entire process, work with your technology providers to determine where these tools make recommendations, and ensure fairness and accountability are applied. New York requires businesses to use independent AI auditors. Audits have long been part of the business landscape in accounting, IT security and federal health information privacy. The next question is: Who audits the auditors? It comes down to whether there should be a body, made up not only of government but also of private and public entities with expertise in these areas, to develop reasonable guidelines.

So, get to know your process, get an internal audit ready to go and train your staff on all of this.

One nation, one law

My final word of warning to business leaders is to watch their state legislators, who might follow New York’s lead with their own regulations. We can’t have 50 different versions of AI anti-bias legislation. The federal government needs to step in and bring the states together. There are already differences between New York and California. What will happen in Nevada, Colorado and other states? If state legislators create a patchwork of laws, companies will find it difficult to operate, not just to comply.

It would be wise for state legislators and regulators to reach out to colleagues in neighboring states and ask how they are dealing with AI in human resources. If states share borders, it is better to align with one another, because they share job seekers.

Lawmakers on Capitol Hill have signaled interest in working on an AI act, though what that would look like, and whether it would include language about employment, is not known at this time.

Revolutionary technologies move at lightning speed compared to the legislative process. The concern is that by the time the House and Senate vote, the technology will have far outpaced the requirements of any bill that passes, and the legislative wheel becomes a hamster wheel. “It’s a very difficult issue, AI, because it moves so fast,” said New York Senator Chuck Schumer. He is absolutely right. All the more reason for federal lawmakers to get ahead of the states.

The hiring and promotion process will only improve if there is more, not less, data and user input for AI systems. Why would we ever go back?

Cliff Yurkiewicz is VP of global strategy at Phenom.

Data decision makers

Welcome to the VentureBeat community!

DataDecisionMakers is a place where experts, including technical people who do data work, can share data-related insights and innovations.

If you want to read about cutting-edge ideas, up-to-date information, best practices, and the future of data and data technology, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

