Without an incident reporting framework, novel harms from AI systems can go undetected and become systemic. For instance, an AI system could harm the public by improperly revoking access to social security payments. CLTR’s research, which focused on the UK, found that the UK government’s Department for Science, Innovation & Technology (DSIT) lacks a centralized, up-to-date system for monitoring incidents involving AI systems. While some regulators do collect incident reports, they may not be equipped to capture the novel harms posed by cutting-edge AI technologies. A more comprehensive incident reporting framework is therefore needed.