This paper finds that 38.9% of job tasks involve large language models, with 80% of workers spending 20% of their time on such tasks.
Its mapping of risk exposure shows that LLMs directly expose 12.4% of tasks to privacy risks, 13.7% to cybersecurity risks, 13.6% to breaches of professional standards, 14.1% to unethical or harmful bias, 10.6% to misinformation and manipulation, 26.4% to safety and physical harm, 26.0% to liability and accountability, and 9.8% to intellectual property risks.
