As more and more money is invested in large language models, the new "closed" systems are reversing a trend observed throughout the history of natural language processing. Traditionally, researchers have shared details such as training datasets, parameter weights, and code to help make results reproducible.
"We know less and less about what datasets the systems were trained on or how they were evaluated, especially for the more powerful systems that are released as products," points out Alex Tamkin, a PhD student at Stanford University whose work focuses on large language models.
Tamkin credits people working on AI ethics with raising awareness of how dangerous it is to move too fast and break things when technology is deployed to billions of people. Without the work done in recent years, things could be much worse.
In the fall of 2020, Tamkin organized a symposium with OpenAI policy director Miles Brundage on the social impact of large language models. The interdisciplinary panel emphasized the need for industry leaders to establish ethical standards and to adopt measures such as bias assessments before deployment and the exclusion of certain use cases.
Tamkin believes that external AI auditing services must grow alongside the companies developing the technology, since internal evaluations tend to fall short, and that participatory evaluation methods involving community members and other stakeholders have great potential to increase democratic participation in shaping AI models.
Shifting the focus
Merve Hickok, research director at the University of Michigan's AI ethics and policy center, argues that it is not enough to try to get companies to set aside or dampen the hype surrounding AI, regulate themselves, and adopt ethical principles. Protecting human rights means moving beyond the debate on ethics to a debate on legality.
Both Hickok and Hanna are following the process by which the European Union is expected to finalize its AI Act this year, to see how the initiative will address models that generate text and images. Hickok said she is particularly interested in how European lawmakers will assign liability for harm caused by models created by companies such as Google, Microsoft, and OpenAI.
"Some things have to be enforced, because we have seen over and over again that if this doesn't happen, companies keep breaking things and putting profit ahead of rights and communities," Hickok explains.
While the measure is being finalized in Brussels, the stakes remain high. The day after Bard's failed demo, a crash in Alphabet's stock cost the company approximately $100 billion. "It's the first time I've seen wealth destruction of this magnitude caused by a language model error," says Hanna, who is nevertheless not optimistic that the episode will convince the company to slow its rush to launch the system. "I don't think they will learn anything."
