
Artificial intelligence has become a common tool for improving writing, photos, and videos. Now, YouTube is under fire for quietly using AI to enhance uploaded videos without informing creators. While some argue the changes might be an improvement, the bigger concern is that the enhancements were applied without consent.
This is not a new complaint. For months, creators have noticed that their videos look different once uploaded to YouTube, even though they did not alter them. Many suspected some form of AI intervention, and now Google has confirmed those suspicions. The company admits that certain videos have been enhanced during processing, though it insists that the process does not involve AI upscaling or generative AI.
Instead, YouTube says it has been running an experiment on select Shorts. According to the platform, traditional machine learning is being used to unblur, denoise, and improve clarity in videos. The company compares it to how a modern smartphone automatically processes video to deliver better quality. YouTube says the goal is to provide viewers with the best possible experience, while also considering feedback from creators.
Still, not everyone is convinced. Musician and YouTube creator Rhett Shull noticed that his Shorts looked smeary and unnatural compared to the same uploads on Instagram. He described the look as almost like a cheap deepfake and questioned whether YouTube was secretly applying AI filters. In a video, he called the issue a massive problem because it risks eroding the most valuable thing a creator has: trust with their audience.
Other well-known creators, including Rick Beato and Hank Green, have noticed similar changes, and discussions on Reddit about a so-called “oil painting” effect appearing on Shorts have multiplied. Many feel this is a form of non-consensual AI upscaling applied to content without any warning, leaving creators frustrated and viewers confused.
The issue goes beyond how the videos look. For some creators, avoiding AI is a matter of principle, part of how they define their work. But with viewers spotting clear differences and assuming creators used AI themselves, accusations of dishonesty have emerged. This damages trust not only in YouTube but also in the individual creators who depend on that trust to build their communities.
YouTube insists the experiment involves no generative AI, but that explanation does not address the main concern. The lack of transparency has led to widespread criticism. Even if the enhancements are well-intentioned and genuinely improve quality, creators argue they should not have been applied without consent, or at the very least a clear notification.
Google is one of many companies pushing AI into different services, and in most cases, this does not spark controversy. But when AI is used to alter user-generated content without warning, it becomes a question of ethics and accountability. For creators who have spent years building credibility with their audiences, YouTube’s decision has left them in a difficult position.
What remains clear is that trust is at the heart of this issue. Without transparency, even the smallest changes can create confusion and anger. As YouTube continues its experiments, the company faces growing pressure to be open about how AI is applied to user content.