FFmpeg is 20 years old today

462 points by Daemon404 4 years ago | 93 comments
  • ghoomketu 4 years ago
    The creator of ffmpeg, Fabrice Bellard(1), is truly a genius and has created so much amazing software, to the world's enormous benefit.

    (1) https://bellard.org/

    • jjeaff 4 years ago
      How much of it is genius and how much of it is just an amazingly productive work ethic?

      Or maybe it requires genius to be this productive?

      I'm just amazed at what some people are able to build. Especially open source. Does he make money from ffmpeg? Or just do it as a side gig?

      Reminds me of the photopea guy. He has basically built a browser based clone of photoshop. Something that took Adobe thousands of engineers and decades to build up over time.

      • fouronnes3 4 years ago
        Yeah, genius could even be an understatement in my opinion. He is up there with the greatest of the greats. Anyone with a quarter of his productive output would easily qualify as a genius already. He has basically no presence online besides his remarkable work, and of course that only strengthens the legend.
        • rockwotj 4 years ago
          > He has basically no presence online

          Probably part of the explanation for the productivity XD

        • umvi 4 years ago
          > Something that took Adobe thousands of engineers and decades to build up over time.

          It's not true in every case, but the efficiency of solo developers and tiny teams is a massive advantage over huge teams crippled by corporate bureaucracy and layers of management. Even though the latter have astronomical budgets compared to the former, that doesn't actually guarantee better software.

        • nl 4 years ago
          It's genius. LZEXE creator, 3x Obfuscated C Code Contest winner, record-breaking Pi calculator.

          Fabrice Bellard started FFmpeg, but doesn't work on it anymore. He also started a few other little projects, including, most famously, QEMU.

          • fouronnes3 4 years ago
            Yeah definitely. If programming genius had a unit it should be called the bellard.
            • CyberDildonics 4 years ago
              Don't forget tcc, the tiny C compiler. You can compile all 6MB of sqlite3 in 0.1 seconds.
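
              Something like this, assuming the sqlite3 amalgamation (shell.c + sqlite3.c; link flags may vary by platform):

                  tcc shell.c sqlite3.c -lpthread -ldl -lm -o sqlite3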
            • mixmastamyk 4 years ago
              Definitely impressive, but don't forget all the open-source libs available now. Decades ago you needed all those folks to build infrastructure that simply didn't exist; these days you get it for free.
            • vernie 4 years ago
              Fabrice's no-frills website has way more rad shit than all the bullshit Nathan Myhrvold pays PR firms to promote.
              • wazoox 4 years ago
                In 2004 I started using QEMU for testing (because it was so much faster than Bochs). My business partner wrote a note to fabrice@bellard.org to thank him for his nice and useful tool, and he replied back about how pleased he was that we used his tool :) Amazing guy.
              • blululu 4 years ago
                FFmpeg is one of those pieces of infrastructure that underpin modern media in such a fundamental way that it is hard to even imagine life without it. In the words of John Carmack: https://twitter.com/id_aa_carmack/status/1258531455220609025...
                • DC-3 4 years ago
                  It's funny, when you think about it, how much money is made by people writing relatively trivial tech on top of incredibly complex and technically impressive software written for far less reward and acclaim (at least outside of engineering circles).

                  https://xkcd.com/2347/

                • js2 4 years ago
                  Literally just used it today to re-sync the audio and video in some movies I recorded where I needed to add a 210ms delay to the audio. There's no better tool that I know of for doing this, even on macOS:

                      ffmpeg \
                          -i "$input" \
                          -itsoffset "$offset" -i "$input" \
                          -map 0:0 \
                          -map 1:1 \
                          -acodec copy \
                          -vcodec copy \
                          "$output"
                  • npteljes 4 years ago
                    Another one I used to merge subtitles into their respective mkv video files:

                        find . -name "*.mkv" -exec ffmpeg -i "{}" -i "{}.srt" -map 0 -map 1 -c copy "{}_merged.mkv" \;
                    • cyphar 4 years ago
                      No hate on ffmpeg but you can do this with mkvmerge as well.
                      • npteljes 4 years ago
                        Quite a nice syntax too:

                            mkvmerge -o out.mkv in.avi in.srt
                        
                        Thanks for the recommendation.
                      • jedberg 4 years ago
                        I'm curious as to how you figured out the audio was 210ms ahead of the video.
                        • js2 4 years ago
                          So I had recorded my son playing keyboard for a recital. The audio was recorded via MIDI into GarageBand into QuickTime Player. The video was off my iPhone via EpocCam over USB, also into QuickTime Player. I was able to get the audio routing set up via a Multi-Output Device I configured in Audio MIDI Setup.

                          The video had a slight delay which I didn’t quite notice till after I’d recorded it.

                          I basically wrote a quick script to generate variations of the recorded movies with delays from 50ms to 500ms in 50ms increments and then just watched each of them to see which appeared most correct for his hands playing the keys and the sound of the notes. It was slightly over 200ms and less than 250ms. 210ms seemed the best.
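
                          A minimal sketch of that script's core (my reconstruction, not the original; filenames hypothetical):

                              input="recital.mov"
                              for ms in $(seq 50 50 500); do
                                  offset=$(printf '0.%03d' "$ms")   # e.g. 50 -> 0.050 seconds
                                  ffmpeg -i "$input" -itsoffset "$offset" -i "$input" \
                                      -map 0:v:0 -map 1:a:0 -c copy "recital_${ms}ms.mov"
                              done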

                          I have now learned the importance of a clapperboard.

                          (Originally I was going to record the audio in GarageBand and the video on my phone and join them later, but I was learning as I went and I wanted to see if I could record the two streams together.)

                          • jedberg 4 years ago
                            That's an awesome story.

                            > I have now learned the importance of a clapperboard.

                            Ha! Good tip! Never thought about using one when doing recordings of recitals and stuff. Good idea.

                          • hiq 4 years ago
                            Players like VLC and mpv let you shift the audio relative to the video on the fly, so you can just play around until you find the perfect setting.
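
                            In mpv, for example, that's the --audio-delay option (positive values delay the audio), also nudgeable while playing via the default Ctrl + and Ctrl - bindings:

                                mpv --audio-delay=0.210 movie.mkv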
                          • arendtio 4 years ago
                            Also just used it a few hours ago to resize and convert a video in order to send it via e-mail :-)
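
                            Something along these lines (a sketch; the exact settings are a matter of taste):

                                # halve the width (height stays even) and re-encode small enough for an attachment
                                ffmpeg -i in.mov -vf "scale=iw/2:-2" -c:v libx264 -crf 28 -c:a aac out.mp4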
                            • gsich 4 years ago
                              maybe mkvtoolnix.
                            • h2odragon 4 years ago
                              I remember reading about it initially and saying "that sounds ambitious, and useful"... I think it's safe to say they changed the world for the better with that single A/V swiss army knife. An amazing achievement.
                              • cpach 4 years ago
                                I remember in the 90s when Microsoft, Apple and RealNetworks pushed hard for their own codecs and tools. IMHO those were dark times for digital video. ffmpeg and Xiph.org played a large part in changing that.
                                • zinekeller 4 years ago
                                  ... which is really disappointing when you learn that H.261/MPEG-1, a broadcast-ready standard, had already been frozen by then. (Of course, even more worry-free formats exist now - it's just a matter of the time needed to replace existing equipment.)
                                  • niftich 4 years ago
                                    You can blame the lack of uptake of MPEG-2 in the personal computing space on the license terms set by the MPEG LA, which charged patent licensees an amount based on the number of encoders and/or decoders they wanted to ship [2], and on the fact that DCT codecs were too slow to decode on CPUs until the mid-1990s. There were competitive proprietary codecs that could be licensed more favorably and performed better. These got built into codec-agile platform media stacks like QuickTime, Video for Windows, and later Real and Flash, and these stacks would gain support for newer codecs as the technology progressed.

                                    It wasn't until 1999 with MPEG-4 that the licensing situation made MPEG codecs an option for desktops again, by which point other proprietary DCT codecs were widely deployed. Both Microsoft and Real dabbled with tech that came out of MPEG-4, but it wasn't until H.264 became decodable by mainstream computers that everyone finally settled down around an MPEG codec. Then, Google bought On2. (But that's a long story for a different thread [1].)

                                    Here's some more detail for this thread:

                                    H.261 pre-dates MPEG-1 by a bit. H.261 was developed by the ITU-T chiefly to support video telephony, because the predecessor codec H.120 was conceptually neat but practically garbage. H.261's design choices and JPEG's design choices factored heavily into MPEG-1 Video, and while the MPEG-1 standard also delivered a program stream and three layers of audio compression, it was hardly broadcast-ready: it was missing a transport stream for unreliable media, and it was missing support for interlaced video. These deficiencies were rectified by MPEG-2 in 1996.

                                    In the early 1990s, DCT-based video like MPEG-1 was too slow on contemporary mainstream desktop hardware, so people developed vector quantization codecs that allowed realtime playback: Cinepak, Indeo, MS Video 1, Smacker, and TrueMotion S. Some of these saw a lot of use in video games, and some were picked up by OS and platform-based multimedia stacks like QuickTime, Video for Windows, and then later, Real and Flash. Those platform stacks were designed with codec agility so that more advanced codecs could be swapped in as developments came along. But correspondingly, a good candidate codec needed a favorable IP situation for widespread deployment, so codecs were often homegrown from skimming contemporary sources (including standards), or licensed from smaller companies - the alternative being to pay the MPEG LA for a license per decoder deployment, and then still have to procure decoder software.

                                    Later, processors became more powerful and you could decode DCT in real time. In 1996, H.263 came out as an improvement over MPEG-2/H.262 at low bitrates. Other codecs followed suit and we ended up with RealVideo, Sorenson Spark in Flash Video, and VP3. The importance of OS-delivered or runtime-delivered media stacks rose: Real, QuickTime, Video for Windows, and Flash. These platforms would go on to dictate the containers, the video codecs, and the audio codecs that were supported, so they had a dominating influence on multimedia on personal computers for the next decade.

                                    MPEG-4 came out in 1999, bringing MPEG-4 Part 2 (-> SP, ASP, DivX, Xvid); H.264/AVC (MPEG-4 Part 10) followed in 2003. The former was a bit of an improvement over H.263, while the latter had great potential, but mainstream processors were once again too slow. Microsoft cribbed MPEG-4 Part 2 to make a version of a Windows Media Video codec, RealVideo did the same, and DivX and later Xvid rose to prominence in the early 2000s. The latter two gained notoriety for being used for DVD rips and filesharing; one could convincingly compress a 480p main title from a DVD in MPEG-4 Part 2 to ~700-1000 MiB, and 700 MiB fit on a CD.

                                    Right around this time, Microsoft was showing off their latest WMV to prove you could have HD movies on a DVD. But content owners wanted bigger discs with better DRM, in part to make filesharing more of a hassle, and were leaning towards H.264 as the codec, so Microsoft got their latest WMV standardized as VC-1, and based their marketing on easier decoding, interlacing support, and a favorable patent situation. They got VC-1 written into the HD-DVD and Blu-ray standards alongside H.264 and the aging MPEG-2. But it turned out there were patents on VC-1 after all, and a patent pool was set up by the MPEG LA just the same; then hardware finally caught up to decoding H.264, so everyone settled on an MPEG codec at last. Briefly.

                                    [1] https://news.ycombinator.com/item?id=15845114 [2] https://www.mpegla.com/programs/mpeg-2/license-agreement/

                                    • cpach 4 years ago
                                      When did they freeze it?
                                    • giantrobot 4 years ago
                                      How were those "dark times" for video? Real and Apple, at least, were trying to solve then-unsolved delivery problems. Apple's video stack also covered desktop video editing/production.

                                      Video is hard. It's a lot of data that needs to be read, processed, and displayed under tight time constraints and needs to be synchronized with associated audio playback.

                                      This is all made more challenging if you're trying to do it on storage, bandwidth, memory, and processing constrained consumer computing hardware. Compression somewhat solved the storage and bandwidth problems but not processing. Better fidelity codecs needed a lot more processing power to decode.

                                      In the early 90s you had MPEG-1 at the high end of the quality scale, but it was so processor-intensive you needed decoder ASICs in consumer hardware just to play it back. Then you had codecs like Cinepak that were far less processor-intensive but of middling quality. Then you had much lower-fidelity codecs like Microsoft's Video1, Apple Video, and even Smacker, which had very low decoding requirements but didn't look great.

                                      Network delivery of any of those on consumer hardware was a pipe dream when 14.4k modems were still rare and 9.6k was common. The h.261 codec, which MPEG-1 was based on, had a minimum bitrate of 64kbps, which was out of reach for pretty much everyone.

                                      Besides the hardware decoding requirements of MPEG-1 it was entirely unsuited for editing. Both QuickTime and Video for Windows were meant for editing on consumer desktop machines. The codecs they supported were meant for editing and then delivery (on CD-ROM typically).

                                      In the mid-to-late 90s, processing power had advanced such that MPEG-1 and h.261/3 could be decoded in real time in software. RealVideo and Sorenson Video 1 were both based on drafts of the h.263 spec, which included video conferencing over POTS connections in its design criteria.

                                      Again I'm not seeing dark times for digital video. There were lots of codecs because they had different uses, limitations, and strengths. The h.26x codecs were designed for video conferencing and it was Real and to a lesser extent Apple that realized they were also useful for streaming over the internet. Both MPEG-1/2 were unsuited for streaming as they didn't support variable frame rates and handled low dial-up bitrates poorly at best. It wasn't until the MPEG-4 überspec that internet streaming, video conferencing, and disc-based delivery settled under a single specification.

                                      While ffmpeg is an amazing project and widely used, it didn't really do anything to settle the proliferation of codecs and containers. It really was MPEG-4 that allowed for that to happen to the extent it's happened.

                                      • axiolite 4 years ago
                                        > Both MPEG-1/2 were unsuited for streaming as they didn't support variable frame rates and handled low dial-up bitrates poorly at best

                                        Nonsense. Variable frame rate is nothing but a very minor bandwidth saver. MPEG-1/2 have a tremendous amount in common with MPEG-4 ASP video and can perform almost as well in most scenarios you can draw up. MPEG-1/2 were mostly hobbled by the use of old, basic encoding hardware and the constrained parameters commonly used (GOP sizes of 12-15) for compatibility with old, basic decoding hardware. Even today you can use ffmpeg to encode MPEG-1, MPEG-2, and MPEG-4 (ASP) video from the same inputs using the same parameters and see very close to equivalent quality at a given bitrate. FFmpeg unfortunately changes its defaults (resolution, GOP size, etc.) to what it thinks you likely want, so you must watch for and control that.
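
                                        For instance, you can pin the contested parameters yourself and compare (a sketch; the bitrate and GOP size here are arbitrary):

                                            for codec in mpeg1video mpeg2video mpeg4; do
                                                ffmpeg -i source.y4m -c:v "$codec" -b:v 2000k -g 250 -bf 2 "out_${codec}.mkv"
                                            done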

                                  • zinekeller 4 years ago
                                    Some have commented that FFmpeg is very complex software, but in its defence, codecs are complex - especially on the encoding side, where you have full control over the parameters. If you work at a media company (on the engineering side, not the creative side, which usually doesn't know or care what the difference between P and B frames is), those switches matter, especially in one-way broadcasting, where there is no way to go back and you need to fit everything into a very constrained transmission. As one of my friends says, engineers at YouTube have a much easier job thanks to the two-way nature of the internet.
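
                                    To make that concrete, a one-way broadcast encode might pin everything explicitly, down to the GOP structure and B-frames (a sketch; the numbers are illustrative, not from any real plant):

                                        ffmpeg -i studio_feed.mxf \
                                            -c:v libx264 -b:v 8M -maxrate 8M -bufsize 16M \
                                            -g 48 -bf 2 \
                                            -x264-params nal-hrd=cbr \
                                            -f mpegts udp://239.0.0.1:1234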
                                    • dstick 4 years ago
                                      Complex is a boon in this case, is it not? It -can- do everything. You need to invest some time into understanding it. There’s no fancy UI, no modern branding. It’s a work horse. It’s so powerful, I love it! Amazing piece of software.
                                      • georgiecasey 4 years ago
                                        Doesn't the H264 codec used by ffmpeg come from the VideoLan/VLC team?
                                        • _Gyan_ 4 years ago
                                          No. The H.264 decoder is internal. The repo for the commonly used H.264 encoder, x264, is hosted at VideoLAN, but its developers are a separate group. VideoLAN also hosts the ffmpeg repo.
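
                                          You can see the split in ffmpeg itself; the codec listing shows the internal h264 decoder and the external libx264 encoder as separate entries:

                                              ffmpeg -hide_banner -codecs | grep -i 264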
                                      • ed25519FUUU 4 years ago
                                        Ffmpeg is an unusual piece of software. Its CLI arguments (to me) are baffling. But it's been around so long that there are answered questions online for virtually anything you want it to do. Don't bother reading the man page; go to Google first.
                                        • riidom 4 years ago
                                          There are also tools with a GUI that make use of ffmpeg (calling them GUI wrappers is probably not fair). And of course, they likely can't do everything the ffmpeg CLI can.

                                          One I know is https://handbrake.fr

                                          Are there more?

                                        • heavyset_go 4 years ago
                                            There are wrappers over it; the Python one [1] is pretty good, and it lets you build complex pipelines represented as graphs.

                                          [1] https://github.com/kkroening/ffmpeg-python
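
                                            A small taste, following the project's README conventions (filenames hypothetical):

                                                import ffmpeg

                                                # build the filter graph declaratively; .run() shells out to ffmpeg
                                                (
                                                    ffmpeg
                                                    .input('in.mp4')
                                                    .filter('scale', 1280, -2)
                                                    .output('out.mp4', vcodec='libx264', crf=23)
                                                    .run()
                                                )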

                                          • the_only_law 4 years ago
                                            Ridiculously useful piece of software, but yeah the docs baffle me more than help me.
                                            • asveikau 4 years ago
                                              I feel like the basics are simple and intuitive. Enough use of it and you will not consult docs for the most basic tasks. Then you start to think "but how do I...?" and go to google. I tend to also forget the answer after a few months.
                                              • bigbubba 4 years ago
                                                > I tend to also forget the answer after a few months

                                                You can add comments to the end of tricky commands with the # character. Then at a later date you can search for commands using the contents of those comments, using something like C-r.
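
                                                  For example, riffing on the resync command upthread:

                                                      ffmpeg -itsoffset 0.210 -i in.mov -i in.mov -map 0:a -map 1:v -c copy out.mov  # resync recital audio

                                                  Months later, C-r "recital" pulls it right back up.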

                                                • asveikau 4 years ago
                                                  On some machines I don't have ~/.history. On others, there is a size limit so when I grep the history file after a few months it might not be there.

                                                  Personally, rather than interactively putting comments in my shell input, I prefer to write scripts or take notes in a text file.

                                              • m-p-3 4 years ago
                                                That, and I usually comment my ffmpeg commands so I understand what I wanted them to do when I come back to them later.
                                                • donatj 4 years ago
                                                  I felt the same for years, but when you really dig into it you can learn a handful of flags that will get you what you want 90% of the time, and you can go to Google after that set fails you.
                                                  • kyriakos 4 years ago
                                                    Its CLI became so complex to account for all its use cases that at this point it could have been replaced by a config file.
                                                    • asveikau 4 years ago
                                                      A config file makes sense if every run takes the same config. But most ffmpeg options are specific to a task and inapplicable to many others.
                                                      • m-p-3 4 years ago
                                                        Yeah, I just make a bunch of bash scripts (which also contain comments) for specific scenarios.
                                                    • the_cat_kittles 4 years ago
                                                    it's a very good example of what is truly useful conflicting with widely held notions of what's important in making good software. basically, despite the arcane invocations, i find it gets the job done more painlessly than any alternative most of the time.
                                                    • spicyramen 4 years ago
                                                      Ffmpeg saved my life when I was troubleshooting quality issues in video calls and the customer was about to cancel the contract. We reconstructed the video from packet captures, and ffmpeg helped us track the issue down to an H.245 media reset that was happening every now and then; ffmpeg was able to detect the fps variation. All free and well documented.
                                                      • xmichael0 4 years ago
                                                        Props to Michael Niedermayer; he's been maintaining ffmpeg for years and does the bulk of the heavy lifting! Thanks! https://www.linkedin.com/in/michaelniedermayer
                                                        • panabee 4 years ago
                                                          For anyone interested in trying FFmpeg in the browser: https://github.com/ffmpegwasm/ffmpeg.wasm

                                                          No affiliation -- found it while searching for browser-based ports of FFmpeg.

                                                          • mraza007 4 years ago
                                                            The guy who created this is truly a genius. His software has been really helpful for me

                                                            And I truly appreciate people who create useful software and open source it.

                                                            • zadokshi 4 years ago
                                                              ^^ Upvoting just in case the author reads this.
                                                            • anonacct37 4 years ago
                                                              So YouTube used ffmpeg quite extensively.

                                                              https://multimedia.cx/eggs/googles-youtube-uses-ffmpeg/

                                                              It wouldn't be too surprising if they still did.

                                                              • petters 4 years ago
                                                                It's pretty interesting that YouTube exposes such a huge attack surface by processing user-uploaded files. Some of those obscure formats are bound to have a security flaw.

                                                                But the ffmpeg processes are surely sandboxed with seccomp or similar, so it probably does not matter at all.
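
                                                                For what it's worth, you can get a crude version of that isolation off the shelf (a sketch, assuming Docker and the community jrottenberg/ffmpeg image, whose entrypoint is ffmpeg itself):

                                                                    docker run --rm --network none --security-opt no-new-privileges \
                                                                        -v "$PWD:/work" -w /work \
                                                                        jrottenberg/ffmpeg -i upload.mp4 out.webm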

                                                                • dkh 4 years ago
                                                                  The best ffmpeg-based video platform exploit I've ever seen was this one [1] where a user could upload a specially-doctored video to your YouTube-esque platform and while encoding it would trick ffmpeg into reading system files on the server and baking them into the encoded output.

                                                                  Literally the hacker would upload a video, wait for it to encode, and then once it was available for viewing on the website, they'd be looking at a video containing the text from `/etc/passwd` or your envvars or some secrets file or whatever.

                                                                  Yes, most encoding services are very well-sandboxed, and even when our tiny streaming platform got hit by this when it first appeared a few years ago, it was a non-issue because there was nothing valuable or compromising on the encode servers for them to read. (I think Ubuntu's AppArmor stopped it dead in its tracks on its own, anyway.)

                                                                  [1] https://docs.google.com/presentation/d/1yqWy_aE3dQNXAhW8kxMx...

                                                                  • franga2000 4 years ago
                                                                    At their scale, they're almost certainly running entire racks' worth of servers entirely dedicated to just transcoding video and only able to access input and output files.

                                                                    (quite likely, those are all actually virtual through some auto-scaling IaaS magic)

                                                                  • irrational 4 years ago
                                                                    So the Web is 30 years old today and FFmpeg is 20 years old today? Did the creator of FFmpeg purposefully start it on the 10 year anniversary of the WWW?
                                                                    • tomaszs 4 years ago
                                                                      Years ago I used FFmpeg to create a video tutorial website. It worked like a charm on my CentOS VPS. I was able to scale videos uploaded by users to different resolutions, cut out frames for video covers, get size and resolution, and convert video to the different formats supported by various browsers. It was amazing, and the only solution available. The project was eventually suspended for various reasons, but working with FFmpeg was an amazing experience, and it is still a great solution when working with video. Surely FFmpeg's impact on today's internet is far bigger than any of us can assume. It is for video on the internet what MP3 was for audio.
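
                                                                      The kinds of one-liners such a site leans on (a sketch; filenames hypothetical):

                                                                          # scale an upload to 720p, preserving aspect ratio
                                                                          ffmpeg -i upload.mp4 -vf scale=-2:720 -c:v libx264 -c:a aac 720p.mp4

                                                                          # grab a frame at the 5-second mark for the cover
                                                                          ffmpeg -ss 5 -i upload.mp4 -frames:v 1 cover.jpg

                                                                          # read back the resolution
                                                                          ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 upload.mp4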
                                                                      • BlackLotus89 4 years ago
                                                                        I sometimes miss mencoder... I always found it way more intuitive than ffmpeg, but I have to admit that by now I'm more proficient with ffmpeg than I ever was with mencoder. Happy birthday ffmpeg, RIP mencoder.

                                                                        Edit: just realized mencoder seems to be still alive and kicking. I thought it was lost in the transition from mplayer (which started using ffmpeg code) to mpv... I'll have to take another look at mencoder and see if it is as easy as I remember :)

                                                                        • ChrisMarshallNY 4 years ago
                                                                          ffmpeg has become the de facto infrastructure for video processing.

                                                                          It seems to have done a great job of keeping up to date, with things like GPUs and shader language and whatnot. I guess that's because he designed a good extension mechanism.

                                                                          I'm grateful for it.

                                                                          • dylan604 4 years ago
                                                                            To do large quantities of transcoding, the industry was dominated by dedicated servers and nodes assembled into a "farm". One system I managed was a $15k server, with each node costing $5k, and of course you needed the hardware to run it all. Now: install FFmpeg, create an image, and spin up new instances as needed.

                                                                            A big Thank You to any and all who have brought FFMPEG to where it is!

                                                                          • ggm 4 years ago
                                                                              Possibly more runtime switches than any other command I use.
                                                                            • rootsudo 4 years ago
                                                                              Wow. I never realized when I started using it it was so new.

                                                                              It's hilarious how my introduction to this taught me so much in terms of codecs and compression.

                                                                              All to just share fansubbed anime.

                                                                              • heavyset_go 4 years ago
                                                                                ffmpeg is great. I've run into more than a few consumer devices that use it for transcoding, which I thought was pretty cool.
                                                                                • oger 4 years ago
                                                                                  ffmpeg is a true swiss army knife. I used it last week to add subtitles to interviews I recorded (via SRT files generated from Amazon Transcribe JSON). I also upscaled, slo-mo'd, and blurred a video background with an X-massy theme for some video calls. All literally within minutes and a bit of Google. A true lifesaver!
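
                                                                                  Roughly these filters, for the curious (a sketch; the subtitles filter needs an ffmpeg built with libass):

                                                                                      # burn an SRT file into the picture
                                                                                      ffmpeg -i interview.mp4 -vf subtitles=interview.srt subbed.mp4

                                                                                      # upscale, slow to half speed, and blur a background clip (audio dropped)
                                                                                      ffmpeg -i bg.mp4 -vf "scale=1920:-2,setpts=2.0*PTS,boxblur=10" -an xmas_bg.mp4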
                                                                                  • dkh 4 years ago
                                                                                    FFmpeg is a miraculous tool and I have relied on it for a couple of decades now. Beyond the plethora of personal use-cases, it has enabled individuals and smaller companies to possess the same abilities as the bigger guys, who themselves are all pretty much using FFmpeg these days. The only folks who need to build or use alternatives are the platforms at the top of the market and/or with immense scaling needs who are truly pushing beyond the realm of FFmpeg's practical abilities. Twitch is one of those very few, and has been nicely open in discussing it. [1] [2]

                                                                                    Couple highlights from my career using it over the years. The first was when I was working on production and post-production for a small studio that had a popular web series and was just about to transition to its first "big" shows, produced for Hulu. This was when 6K raw video was just becoming a thing: we had over 50TB of footage, GPU decoding was brand-new, Windows machines couldn't work practically with Apple ProRes - lots of challenges. I ended up building a system that did things like transcode raw footage into various formats automatically whenever the server noticed new footage, automatically collect and store the metadata from every shot somewhere we could centrally browse/search/filter it, etc. When it came time to deliver, it would automatically create various outputs for the web. We had to deliver ProRes masters in the end and had recently transitioned entirely to PCs. This was around when somebody successfully implemented a pretty good ProRes encoder for FFmpeg, so we were then able to encode and deliver these huge ProRes outputs not only without needing a Mac, but entirely on our servers, no longer requiring someone's workstation to be hijacked for a full day. It may not seem too revolutionary, but there was no way we would've been able to work with the same efficiency, for the same cost, in the same timeframe otherwise.

                                                                                    A couple years later, at a new (now defunct) video platform with millions of videos and maybe 5 back-end engineers, FFmpeg allowed us to build our own service to encode all uploads into the many resolutions and formats required. Encoding services were (and still are) very expensive, but in just a couple weeks we had our own that ran on standard Ubuntu server instances, spinning up/down depending on load. Immense cost savings, and not tied to any particular company. Shortly thereafter, GPU instances were available from most cloud providers and `nvenc` was available in FFmpeg, so we were able to dramatically speed up the encode process with maybe a day of work by adding GPU encoding into the mix.
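
                                                                                    For reference, switching such a pipeline onto GPUs really is about one flag's worth of change (a sketch):

                                                                                        # the CPU x264 encode becomes NVENC:
                                                                                        ffmpeg -hwaccel cuda -i in.mp4 -c:v h264_nvenc -preset fast -b:v 5M -c:a copy out.mp4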

                                                                                    These may seem like pretty obvious possibilities now, but it cannot be overstated how insane it was, especially at the time, for tiny and/or cash-strapped teams to be able to do all of this so easily, and that the tool at the crux of it all, FFmpeg, was completely free. Yes, FFmpeg can be a pain in the ass to figure out, and it's easy these days to take it for granted, but in my opinion it has been truly revolutionary.

                                                                                    [1] https://www.youtube.com/watch?v=LsF5bHRxC_M [2] https://blog.twitch.tv/en/2017/10/10/live-video-transmuxing-...

                                                                                    • ponderingfish 4 years ago
                                                                                      The best video processing and editing tool ever made!
                                                                                      • Snelius 4 years ago
                                                                                        And still rocks! :)
                                                                                        • PostThisTooFast 4 years ago
                                                                                          And Apple is trying to scare people away from using it.

                                                                                          Sad.