Script for webm creation?

Has anyone here written a script for webm creation?
Last time I looked, I remember there being several options for windows, but none for linux.

t. has too many large mp4s

Other urls found in this thread:

github.com/Kagami/webm.py
trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo
wiki.webmproject.org/ffmpeg/vp9-encoding-guide
github.com/VP8M8/WebM-Auto-Audio
bugzilla.mozilla.org/show_bug.cgi?id=1305907
bugzilla.mozilla.org/show_bug.cgi?id=1210357
ffmpeg.org/ffmpeg.html
archive.is/WtEn0
tldp.org/LDP/abs/html/
dr-lex.be/info-stuff/videocalc.html
opus-codec.org/examples/
developers.google.com/web/updates/2013/07/Alpha-transparency-in-Chrome-video
a.pomf.se/okifqu.html
en.wikipedia.org/wiki/Named_data_networking
twitter.com/AnonBabble

Yeah, I have, but I'd recommend you use this one:
github.com/Kagami/webm.py
Took features from my script and does it cleaner.

ffmpeg -i input.mp4 output.webm

what the f*g

thanks user

Okay, now you have five minutes to constrain the quality of a long webm to 8MB, burn subtitles into it, and do it as a 2-pass encode.
Protip: you can't

It's simple maths, user. Use your brain.
~4 minute (256 second) video, 1mbps total bitrate, 128kbps audio bitrate, 32MB --> 8MB
32/8 = 4, so the total bitrate has to drop to 1000/4 = 250kbps
audio stays at 128kbps (re-encoded to Opus, since the webm muxer won't take the mp4's AAC), leaving 250-128 = 122kbps for video
$ ffmpeg -i in.mp4 -c:v libvpx-vp9 -b:v 122K -c:a libopus -b:a 128k out.webm
ffmpeg also has subtitle burning and two-pass encoding for VP9, but I don't use either and can't be bothered to startpage it.

For subtitles: trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo

VP9: wiki.webmproject.org/ffmpeg/vp9-encoding-guide
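Putting those two together, a minimal 2-pass encode with burned-in subtitles would look roughly like this (filenames and the 122k figure from the calculation above are placeholders, and the subtitles filter assumes an ffmpeg built with libass):

$ ffmpeg -i in.mp4 -c:v libvpx-vp9 -b:v 122K -pass 1 -an -f null /dev/null
$ ffmpeg -i in.mp4 -vf subtitles=subs.srt -c:v libvpx-vp9 -b:v 122K -pass 2 -c:a libopus -b:a 128k out.webm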

I use a slightly modified version of >>673306:

#!/bin/bash
# needs bash, not plain sh: uses [[ ]] and let
if [[ $# -ne 5 ]]; then
    echo "usage: $0 input.avi start_second end_second output.webm resolution"
    exit 1
fi
infile=${1:?missing input file}
startS=${2:-0}
endS=${3:-4500}
outfile=${4:-$infile.webm}
resolution=${5:-848x480}
let "durationS = endS - startS"
# ~12MB target: 12 * 8192 kbit spread over the duration, minus 64kbps for audio
let "bitrate = 12 * 8192 / durationS - 64"
if [ $bitrate -gt 600 ]; then
    bitrate=600
fi
printf "The start timestamp is $startS and the end timestamp is $endS. The duration is $durationS. The bitrate is $bitrate. The resolution is $resolution\n"
set -v
echo "Performing first-pass encode..."
ffmpeg -stats -i "$infile" -ss "$startS" -to "$endS" -pass 1 -c:v libvpx-vp9 -s $resolution -b:v "$bitrate"k -speed 4 -g 240 -threads 4 -tile-columns 6 -frame-parallel 1 -auto-alt-ref 1 -lag-in-frames 25 -an -sn -f webm /dev/null -y
echo "Performing second pass..."
ffmpeg -stats -i "$infile" -ss "$startS" -to "$endS" -pass 2 -c:v libvpx-vp9 -s $resolution -b:v "$bitrate"k -speed 1 -g 240 -threads 4 -tile-columns 6 -frame-parallel 1 -auto-alt-ref 1 -lag-in-frames 25 -c:a libopus -b:a 64k -map_metadata -1 -sn -f webm "$outfile" -y
echo "Completed encoding $outfile"
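Usage is then e.g. (script name hypothetical):

$ ./webmclip.sh input.avi 30 60 clip.webm 848x480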

What is the point of encoding it twice?

Without consulting docs or snippets. It took you how many hours?
I know how simple it is, I fucking wrote a script that used mplayer filters to crop video, for fuck's sake.
That doesn't mean it's easy to do quickly.

lmao
Yeah, it's fraught with peril, especially if you're trying to do it as quickly as possible. Do you even know the fastseek trick? And subtitles are yet another fucking thing to extract from the file, cut from the point you're cutting the video, and burn back in.

It's not easy to do quickly.
That's why people write scripts.

The code inside the .bat file in this image might work on linux.

do some research, dumbass

To help combat undershooting and overshooting, I wrote a script that increases or decreases the audio bitrate of webms in steps to hit a target file size. That way there's no need to re-encode the video if the file ends up slightly smaller or larger than expected.
github.com/VP8M8/WebM-Auto-Audio
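The gist of it, as a sketch only (the real logic is in the repo above; the 64k starting audio bitrate and filenames are illustrative):

#!/bin/bash
# Re-encode just the Opus track at a corrected bitrate to close the gap to the
# target size; the VP9 stream is copied, so no video quality is lost.
target_kb=8192                                  # 8MB limit, in KB
actual_kb=$(( $(stat -c%s out.webm) / 1024 ))   # current file size in KB
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 out.webm)
delta=$(echo "($target_kb - $actual_kb) * 8 / $dur" | bc)   # kbps of slack
ffmpeg -i out.webm -c:v copy -c:a libopus -b:a $((64 + delta))k fixed.webm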

I use it along with my script to achieve the highest possible quality in only two commands.

Is VP8 even genuinely better than x264? VP9 takes way too fucking long for my toaster to encode regularly.

Am I missing something here? Is Holla Forums a decade behind on video encoding technology? Rate-factor-based encodes are superior to raw bitrates.

VP9 has gotten substantially better in the recent updates. Compared to libvpx 1.3.0 I've experienced a 10x increase in encoding speed while producing better quality. Just remember to explicitly enable multithreading, because by default ffmpeg only uses a single thread for VP9.

Opus is leagues ahead of AAC, so you can steal some of its bitrate and put it towards VP8. The only downside is that VP8 only supports the bt601 color space, which is an issue if you don't convert bt709 content (which is pretty much everything these days). Most videos are sub-HD and browsers right now are dumb, so they get played back as bt601 anyway because almost no one tags their color space.
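If you do want to convert, something like this should do it (a sketch, untested here; colormatrix is the old/simple filter for this job):

$ ffmpeg -i in.mp4 -vf colormatrix=bt709:bt601 -c:v libvpx -b:v 1M -c:a libopus out.webm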


VBR is better suited to our purposes since we have a hard max file size. It's very easy to calculate the bitrate needed to max out the file limit for a given video.
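e.g. for an 8MB limit: video_kbps = 8 * 8192 / duration_in_seconds - audio_kbps (same arithmetic as the calculation earlier in the thread, with 8192 kbit per MB).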

That's true (though libvpx is awful at rate control). But CRF mode might give better quality, as shown in the recent Netflix benchmarks. It's just not clear whether you should use 1-pass or 2-pass (CQP/CRF); the Netflix reports are shit… Needs additional testing.
Hope to get this merged soon: bugzilla.mozilla.org/show_bug.cgi?id=1305907
Vimeo is also investigating VP9.
I'm glad you're back. Too few people are really interested in WebM, even though it might be our bright future with the rise of AOMedia.

You also always want CRF for short clips/loops and CQP helps to prevent undershooting.
Encoding is like art, too many things to consider.

One more thought:

2pass + "-b:v 0 -qmin q -crf q -qmax q" still should be better than 1pass + "-b:v 0 -crf q", because you can use AltRefs only with 2pass.
And obviously Netflix didn't test that.
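Concretely, that's something like this (the q value is illustrative):

$ ffmpeg -i in.mkv -c:v libvpx-vp9 -b:v 0 -crf 30 -qmin 30 -qmax 30 -pass 1 -an -f null /dev/null
$ ffmpeg -i in.mkv -c:v libvpx-vp9 -b:v 0 -crf 30 -qmin 30 -qmax 30 -pass 2 -auto-alt-ref 1 -lag-in-frames 25 -c:a libopus out.webm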

Yeah, libvpx rate control is pretty horrible, which is why I made my script. I remember in earlier versions VBR was recommended over CRF because they hadn't optimized CRF yet, but I'm not sure about the current version. I experimented a bit with CRF and qmin/qmax a while back but I'd need to retest. It would be interesting to see if there's a strong correlation between VBR bitrate encodes and CRF + qmin/qmax encodes at the same file size; that way a script could choose the correct qmin/qmax values to use. I have a feeling there's an even bigger margin of error doing it that way, though.
Glad to see it's progressing. We recently got VP9 bt709 tags working in Firefox 52
bugzilla.mozilla.org/show_bug.cgi?id=1210357

I agree it seems like only the same handful of people actually go in depth about WebMs.
We'll see what AOMedia has ready by May when they inevitably get picked for NETVC. I'm predicting (based on VP10 and speculation) that once the final version is out sometime in 2018 it'll be right behind HEVC. After a few years of optimization it could possibly surpass it.

How about you write a script yourself and learn something about encoding videos and bash scripting in the process? Here are some links that'll get you started:
ffmpeg.org/ffmpeg.html
archive.is/WtEn0
tldp.org/LDP/abs/html/
dr-lex.be/info-stuff/videocalc.html
opus-codec.org/examples/

Good luck!

Why is Bash's mathematical comparison notation so awful?

go back to reddit

Maybe this is the thread to ask this in.


Can anyone explain the proper use of the lossless FFV1 video codec? x264 in lossless mode gets a fraction of the file size. I just don't get it. Am I doing something wrong here?

x264 is a lot slower. You can mess with the slice size and do a 2-pass, but in the end it's a fast codec. If you want small, use VP9's lossless mode.

FFV1 that is
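VP9's lossless mode is just a flag, for reference:

$ ffmpeg -i in.avi -c:v libvpx-vp9 -lossless 1 out.webm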

Oh so the point really is that it's just fast? Why does that even matter when encoding lossless for archival purposes?

Actually, I spent all day Saturday comparing lossless x264 to lossless x265 and lossless VP9. VP9 actually gives the largest file size of the three. If anyone is aware of a lossless codec that gives better sizes than x264, I'd love to know it.

Why would you ever use the video equivalent of LZO for archival?

That's my question: do people really care about fast access to information when archiving? I want to save as much space as possible on shit that's rarely going to be touched. Whenever I do need to return to it, I already have the extra tools and the slightly greater time needed to manipulate the more compressed format.

What's the use case for lossless in your case? Just store whatever source you have, don't re-encode.

What is this, Stack Overflow?

Two reasons. First, I like to retain video I might end up encoding to another format for uploading to a video site at some point in the future. It's not always youtube, and the best possible format to encode to is always in flux, so sometimes it's best to keep the highest quality on hand instead of transcoding later with extra quality loss in the process. Second, I like making animated GIFs/PNGs, and you absolutely have to keep the source video lossless if you want to make good ones. Especially when it comes to 2D video games.

Just store the original. You won't make the source any better by converting it to some lossless format.
Do you even understand how the compressed → decode → re-encode → compressed chain works?

Seems like you're doing it really wrong.

It's called saving space, dummy. I don't want to clog up my HDD with multi-gigabyte videos.

Saving what? You won't make the source smaller by decompressing it and compressing to lossless, it will be larger.

I think someone got lost in the reply chain here. I'm usually starting with raw video in AVI form. This takes up monstrous amounts of space to store, so I sometimes need a format to compress it losslessly.

At the moment I have not found a codec that provides better lossless compression than x264, which surprises me given the existence of dedicated lossless codecs like FFV1.
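For reference, "lossless x264" here means a QP of zero, i.e. something like (-preset veryslow just trades time for a smaller file):

$ ffmpeg -i in.avi -c:v libx264 -qp 0 -preset veryslow out.mkv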

Anyone happen to know of an encoder that gets smaller lossless video file sizes than x264?

Let me guess, the video is from FRAPS or similar desktop recording software capturing your video game. If you really think keeping your recording lossless is better than encoding it lossy with a low CRF for long-term storage, you're really autistic. A high-bitrate lossless source encoded to a lower-bitrate lossy video at a low CRF looks visually identical to the lossless original. Blu-ray rips are an excellent example of this.

No, the rips I generally have a need to save lossless come from low-resolution 2D games, often emulators like MAME which dump to raw AVI video.

Once again I will point out the importance of ripping lossless frames when constructing an animated PNG/GIF of a 2D game. Dedicated animation formats save space by discarding pixels that are identical between frames. When you try to create one from a lossy video, it always results in extra-large files due to altered pixels creating animation where none existed originally.

Here is a simple example. One of these was created from a lossless video source. The other was created from a lossy video source. The latter is nearly ten times the file size.

My bad. You could try testing CorePNG to see if it's comparable to x264 in your case. It uses the same compression as PNG and has the option of compressing each frame as a P-frame (the delta of the previous frame), which sounds like it should work well for you.
Also, the lossy-encoded PNG has transparency issues at the top and bottom of the image.

Wow, this is neat. Thanks, I've always wondered if someone had tried making a video codec that borrows techniques from animation formats. Gonna have some fun with this.

The latter example was from an old converted GIF actually, meaning it didn't have a comparable frame rate to the other one. Here's a better illustration:

Lossless APNG -> x264 video at CRF=12 -> back to APNG

Now this one is over 10x the file size!

Bonus CRF=1 version for curiosity.

With animation, it pays to stay lossless!

Life is suffering.

WebM (as a container) is overly verbose, since it's based on Matroska's EBML, a binary take on XML. I propose we simply use transport streams adapted for VP9, because there is nothing Holla Forums programmers like more than reinventing the wheel.

Anyone know methods of fine-tuning webms? Like if I want to manually add a keyframe somewhere, or remove keyframes from shit like slideshows. These encoders are kinda shit at doing this. I know too few keyframes can shit up some players' seek functions, but idk, I just want to fuck around with it for reasons.

I'm having an issue that's driving me fucking nuts, maybe the ffmpeg pros in here can help?

I'm trying to resize a 224x448 video to 336x448, and no matter what syntax I use or which codec I try, none of my damn media players will display it properly. They either blow the display up to 336x672 (VLC and MPV) or show it at the original 224x448 (Parole media player). Both of those keep the original video's 1:2 aspect ratio, when what I actually want to see is the resized video's proper 3:4 ratio.

When I look up the media information, VLC reports the right resolution numbers and ffprobe says they're 336x448 too. But nothing seems to want to play it back in the proper shape, leading me to think ffmpeg isn't resizing them properly even though it labels the resolution as having changed. Here are some examples I've used:

ffmpeg -i input.webm -vf scale=w=336:h=448 -vcodec libx264 -crf 24 output.mp4

ffmpeg -i input.webm -vf scale=iw*1.5:ih -vcodec ffv1 output.mkv

ffmpeg -i input.webm -s 336x448 -vcodec libvpx -crf 24 output.webm

As you can see, I've tried multiple different syntaxes, codecs, and containers, and they all produce the same problem. What am I doing wrong here?

-vf scale=336:448,setsar=1

Goddamn, so that's how it works. Can you tell me the difference between display aspect ratio and sample aspect ratio?

That notation isn't really bash's own; [ is just another name for the test command, which is why comparisons are spelled like command-line flags.
$ man test
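For what it's worth, bash does have built-in arithmetic these days; these two lines are equivalent, and the (( )) form is the one you want for maths:

[ "$bitrate" -gt 600 ] && bitrate=600
(( bitrate > 600 )) && bitrate=600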

Somehow that makes it display in 9:16, which still isn't quite right. Apparently the display aspect ratio is what needs to be altered. Thanks for the tip user; I've done quite a lot of video encoding but somehow never knew about this crap. Pretty handy for video games.

SAR = sample (i.e. stored pixel shape) aspect ratio
DAR = display (i.e. viewed) aspect ratio

The documentation is actually a bit more subtle than that, but that's how I keep them straight.

E.g. if you have a 1920x1080 video which is stored as 1920x1080 and viewed as 1920x1080, then SAR=1:1, DAR=16:9 (check it with "ffprobe vid.mkv").
When you apply the scale filter, it tries to preserve the original aspect ratio by changing the SAR, which is why you need to reset it back to 1:1.
You also can avoid actual resampling and change only DAR, apply "-vf setsar=3:4" in your previous example and result would be SAR=3:2, DAR=3:4.
Though "-vf" doesn't work with "-c copy" so it might be even better to use e.g. "mkvmerge --display-dimensions 0:336x448 in.mkv -o out.mkv".

err, setdar

idk, works for me. setdar=1 will make output video square.

setdar=3/4 is what did it for me. But I seem to have had SAR and DAR confused with each other. So it's the Display Aspect Ratio that controls playback, not the Sample Aspect Ratio?

If one of these aspect ratio options merely tells video players to stretch/interpolate pixels, then why can't it be altered without re-encoding?

Indeed: comparing two videos encoded identically, one with an altered Display Aspect Ratio and one without, I see a single byte of difference in file size. Why can't this option be altered without a re-encode?

4 bytes difference if the Sample Aspect Ratio is changed.

Read last sentence of

>bugzilla.mozilla.org/show_bug.cgi?id=1305907
Yay, just fixed for FF53+.

It seems the -aspect parameter in ffmpeg can alter the display aspect ratio without re-encoding. No need for mkvmerge.
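i.e. something like this (keeping the streams intact with -c copy):

$ ffmpeg -i in.webm -c copy -aspect 3:4 out.webm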

Test.

Is this a lie? How does it work?

developers.google.com/web/updates/2013/07/Alpha-transparency-in-Chrome-video

Maybe VP8?

In Chrome.

Do I really gotta download some faggy botnet browser to see this work right?

...

...

Does anyone happen to know if libopus has a VBR constant-quality parameter analogous to the 0-10 value that libvorbis has? I've been combing through the documentation here and I can't seem to find anything like it. It's rather shocking to me that the successor to Vorbis wouldn't have this.

What witchcraft is this?

(checked)
The "VBR constant quality parameter" that vorbis has is just really a target bitrate with some VBR constrains. Vorbis is much more limited when it comes to target bitrates then Opus. With Opus you can directly choose any target bitrate from 1-500 kbps.

Okay, but how do I make Opus target a particular quality of audio by giving certain parts of a stream higher bitrates than others?

Do you not understand what VBR means? The encoder analyzes the audio and dynamically distributes the bitrate so that complex sounds are given priority over simple ones. It's all done automatically. Opus is especially good at this, dropping the bitrate to almost nothing when there's a second or more of silence.
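In ffmpeg terms that's just (VBR is already libopus's default; the flag is shown for explicitness):

$ ffmpeg -i in.flac -c:a libopus -b:a 128k -vbr on out.opus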

ffmpeg -i in.mkv -r ntsc-film -loop 1 -i mask.png -c:v libvpx -lavfi "[1:v]alphaextract[a];[0:v][a]alphamerge" out.webm
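(For anyone wondering: alphaextract pulls a grayscale alpha mask out of mask.png, and alphamerge attaches it to the main video as a transparency channel; VP8/VP9 in webm can carry that alpha, which is the witchcraft in question.)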

Ah okay. I was curious because I usually don't use a quality parameter for audio the way I'd use a custom rate factor for video encodes.

I bet he supports the windows 10 update system

a.pomf.se/okifqu.html


It's a step in the right direction. There's no reason whole labs should download the same update at the same time from the same central server; if one machine already has it, that machine should distribute it.

People have put package repos for Arch and other distros on IPFS, and I'm sure people were doing some kind of decentralized package distribution on those systems before. It's a great use for something like that: a drop-in, more or less automatic way to have a distributed repository without much setup or maintenance.

en.wikipedia.org/wiki/Named_data_networking is even better.
I recommend watching Jacobson's talks on CCN/NDN, very inspiring.

So I should stop using bash just because some faggot on the internet tells me not to? Make an argument why I should use sh instead of bash and I'll consider it.

Don't use bash/sh at all, because you don't know it.
It's like PHP or even worse: all those escaping/spaces/wildcards/special-chars/error-handling/POSIX-deps/flag-incompatibility/GNU-extension/paths/pwd/etc. issues.
And it's not really cross-platform (requiring Cygwin/WSL for simple scripts is overkill).

Use Python instead, or Perl at least. Only faggots use Bash for more than 5-line scripts. They either 1) don't fully understand that bash is shit and why it's shit, or 2) are too old/lazy to learn something better.

I can't wait for any of these solutions to actually take over. The only reason I'm invested in IPFS is that there's a working (albeit unfinished) implementation that actually shows some promise. If we can have something even better than that I'd like to see it become mainstream, but I think a lot of that relies on a transition plan and actually having implementations.

CCN and NDN seem like good concepts that I'd like to learn more about.

Interesting quote too

I watched part of a talk Jacobson gave at some Verisign event, and I'm just as frustrated as he is by all the limitations we currently have for no good reason. Everything we own has the means to connect to everything else in better ways, but it just doesn't. Nothing scales easily; it's all a mess, and it's a waste of developers' time to focus on distribution instead of their original goals when they could let others worry about how distribution and routing work.

I want nothing more than for these distributed concepts to become commonplace already.
