How regulators use machines to make decisions
ASIC did not directly address questions about these automated programs, but in a general statement, a spokesman said claims that artificial intelligence was used were false.
“The automation of some processes must not be confused with the use of artificial intelligence,” said the spokesman. “Like other regulators around the world, we believe AI will play a role in the future. When we use AI, we do so responsibly, fairly and safely, guided by principles that we will define and transparently share.”
ASIC receives vast amounts of data on possible wrongdoing, with 20,000 reports, complaints and other types of information filed this fiscal year to date, the spokesman said.
“We cannot and will not investigate every instance of possible wrongdoing that comes to our attention,” he said. “We must make careful and sometimes difficult decisions and use our regulatory expertise, judgment and experience to evaluate a wide range of factors and information.”
UNSW Professor Toby Walsh, an AI expert, said government is a natural candidate to use artificial intelligence: it collects massive amounts of data, is in a position of trust, and is under constant pressure to do more without spending more money.
“The cheaper you can do things, the more your mission expands,” Walsh said.
ASIC is far from the only regulator relying on artificial intelligence and automated systems. Some state police forces are using AI to identify suspects from security camera footage and have been testing systems to assess the risk of repeat domestic violence offences. The tax office uses it to flag tax returns that appear unusual. The anti-money laundering agency AUSTRAC uses it to help find suspicious transactions. APRA, another regulator that oversees banks and pension funds, noted in its latest annual report that it is exploring automated risk assessments. AUSTRAC did not respond to a request for comment. APRA declined to comment.
Those are probably the tip of the iceberg, according to a 2021 report by the NSW Ombudsman, an independent government regulator.
“A significant impediment to meaningful debate about future government governance of machine technology use is an almost total lack of transparency about that use,” the report said.
Federal Finance Minister Katy Gallagher said: “Public services must act ethically, treat people with respect and make informed and evidence-based decisions. As automated decision-making becomes more available, the Robodebt Royal Commission has stressed the need to act cautiously.”
Hanging over perceptions of all of them is the government’s best-known automated system, the rogue welfare program colloquially known as “robo-debt,” now so infamous that it has prompted a royal commission that seems to deliver damaging new revelations daily.
Robo-debt was flawed from the start because it assumed that welfare recipients’ income was spread evenly across the year, when the law required it to be measured week by week. Walsh did not want that failure to stop the government from using automated systems. He argues that public servants are cautious by nature and should learn from the private sector’s approach of bringing products to market quickly and iterating, while remaining aware of the damage that wrong government decisions can do.
“The bad thing about robo-debt was that they stuck by their guns when the evidence came that it was doing harm,” Walsh said.
“If they had iterated quickly and said, ‘Well, wait a minute, let’s change the calculation or think about whether we’re doing this right,’ once the evidence started coming in during the first year of the program, then most of the damage done by robo-debt would have been averted.”
UTS professor Ed Santow, a former human rights commissioner who led a major report on government AI, would prefer a different approach. He said there are areas where governments can experiment with AI, such as improving train schedules, and others where they should be cautious.
“In high-stakes decision-making, it’s not safe to start with what the tech world would call a ‘beta product’ and then just iterate on your customers, because they’re citizens and the decisions can be life-changing,” Santow said.
“The other difference with the corporate sector…is that in the corporate sector people can generally shop elsewhere. The state has a monopoly. It’s the only game in town.”
When ASIC declines to pursue a director, there are often no alternatives with the resources to do so. In the Wilson case, ASIC acted, funding liquidators so they could do more work. The spokesman confirmed the ban on Wilson and his wife, Melinda Wilson, who was also disqualified, has not been updated. Melinda Wilson did not respond to calls and text messages seeking comment from her and her husband, whose previous phone number and email address are inactive. He did not respond to a LinkedIn message.
“We believe this ban was an important action despite Mr Wilson’s bankruptcy,” the ASIC spokesman said. “ASIC has the power to bar an individual from serving as a director for up to five years, and if a director engages in systemic or serious misconduct, even while already bankrupt, ASIC will take steps to disqualify them to protect the public.”