Slate: A.I.’s Groundhog Day
December 28, 2023
On top of all that, many aspects of these technologies make them “genuinely difficult to regulate,” says Ben Winters, senior counsel at the Electronic Privacy Information Center, a digital privacy research nonprofit. For instance, placing regulations on A.I.’s ability to generate text, images, music, or other information could be unconstitutional, given First Amendment protections. Some have proposed that companies should be held accountable for their algorithms’ output, but it’s unclear exactly where a company’s liability for information disseminated on its platforms begins and ends, and such liability could have chilling effects on freedom of speech online in general. Questions of accountability become extra tricky when dealing with algorithms, which function as a “black box”: many engineers who helped train and shape the technology can’t be sure exactly why the system they built ends up behaving one way or another.
There are other gray areas that are difficult to navigate, like which types of online data are fair game as algorithmic inputs and what sort of consent, if any, companies should seek. In 2019, I reported on a case in which researchers scraped publicly available YouTube videos to train a model to guess what a person looks like, and one of the unwitting participants was surprised to learn he’d been part of the model’s training data. It’s not technically illegal, but it doesn’t feel ethical, either. Questions about online data privacy “dovetail with a lack of privacy protections that we still don’t have yet,” says Winters.
… I asked Winters for details about how companies were carrying those out: What types of data are assessors assessing? Who are the assessors—people who work within a company, or third parties? “There isn’t a universal definition of it; every time it’s being used, it means something different,” Winters says. “Whether it’s valuable and helpful to consumers—well, the devil’s in the details.”
Read more here.