On Wednesday, OpenAI released a new open-source AI model called Whisper that recognizes and translates audio at a level approaching human recognition ability. It can transcribe interviews, podcasts, conversations, and more.
OpenAI trained Whisper on 680,000 hours of audio data and matching transcripts in 98 languages collected from the web. According to OpenAI, this open-collection approach has led to "improved robustness to accents, background noise, and technical language." It can also detect the spoken language and translate it to English.
I was an unwilling participant. Visual Studio has it baked in now. One day, it started showing autocomplete prompts.
It's useful for writing boilerplate like properties, lists, and so on, and it sometimes knows which loops or conditions I want. It's a glorified autocomplete, similar to the predictive text in messenger apps.
It's not sentient, and it can't write full programs yet. It helps me write faster, but it doesn't yet take away the mental work of architecting.
Now they can be digitally looted from anywhere in the world, from the comfort of the exploiter's home or another remote location.
Putin may have rescued Ukraine from far more than he realizes. Or maybe he knew!