1. Predictable artificial intelligence
Lexin Zhou, Pablo A. M. Casares, Fernando Martínez-Plumed, John Burden, Ryan Burnell, Lucy Cheke, Cèsar Ferri, Alexandru Marcoci, Behzad Mehrbakhsh, Yael Moros-Daval, Danaja Rutar, 2026, original scientific article
Description: Many areas of artificial intelligence, and machine learning in particular, aim at being probably correct, i.e., valid on average, rather than pursuing the idealistic goal of being provably valid for all inputs. However, AI systems could still be predictably valid, such as an imperfect robot deliverer for which we can reliably and precisely predict the task instances for which it is correct and safe — its valid operating range. "Predictable AI" is a nascent research area that explores ways of anticipating key validity indicators (e.g., performance, safety) of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI, and should therefore be prioritised over performance. We formally characterise predictability, explore its most relevant components, illustrate what can be predicted, and describe alternative candidates for predictors, as well as the trade-offs between maximising validity and predictability. To ground these concepts, we present an array of illustrative examples covering diverse ecosystem configurations. "Predictable AI" is related to other areas of technical and non-technical AI research, but has distinctive questions, hypotheses, techniques and challenges. This paper aims to elucidate them, calls for identifying paths towards a landscape of predictably valid AI systems, and outlines the potential impact of this emergent field.
Keywords: predictable AI, general-purpose AI, AI safety
Published in RUP: 09.02.2026