A child protection worker in Victoria used ChatGPT to draft a report submitted to the Children’s Court, prompting the Department of Families, Fairness and Housing (DFFH) to ban generative AI tools.
According to the state’s information commissioner, the report included “inaccurate personal information” and minimized the risks faced by the child involved. While the report did not alter the outcome of the case, the Office of the Victorian Information Commissioner (OVIC) emphasized the potential dangers that could have resulted from such inaccuracies.
The report should have reflected the child protection worker’s assessment of the child’s risks and needs, as well as the parents’ ability to ensure the child’s safety and development. Instead, the ChatGPT-assisted report inaccurately portrayed the threats to a young child living with parents charged with sexual offenses.
The case worker entered sensitive and specific personal details into ChatGPT to generate the report, in breach of state privacy regulations. OVIC pointed out that this information was disclosed to OpenAI, a foreign entity, and is now outside DFFH’s control.
An investigation revealed multiple signs of ChatGPT’s involvement, including discrepancies in personal details and language that did not align with child protection protocols or training.
Furthermore, OVIC noted that the use of ChatGPT might not have been limited to this single incident. An internal DFFH review identified around 100 cases within a year suggesting ChatGPT may have been used to draft child protection documents. In the latter half of 2023, nearly 900 employees, about 13 percent of the workforce, accessed the ChatGPT website, yet no specific training on generative AI use was provided.
As a result, OVIC issued a compliance notice requiring DFFH to prohibit the use of generative AI tools and to block access to them within the department. The department was expected to inform all staff of this ban by yesterday and has until November 5 to establish technical measures preventing access to various generative AI tools, including ChatGPT.
The case worker involved is no longer with the department. In its response to OVIC, DFFH acknowledged the unauthorized use of ChatGPT in this instance but claimed there was no evidence of staff using it for sensitive matters, a statement OVIC disputed. The department characterized the incident as “isolated” and said the use of generative AI is not widespread within its operations.