Has anyone else downloaded and messed around with the AV1 source code and examples...

Has anyone else downloaded and messed around with the AV1 source code and examples? I recorded 9 seconds of a 1080p commercial and converted it to 420 y4m so it would work with the sample encoder. I'm currently getting .76 frames per minute on an i5 Haswell. It says I have 6 hours left on the encode. Of 9 seconds. Even if it does manage to be better than h.265, I don't think people will waste a year to encode a movie.
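For anyone checking my math, the back-of-the-envelope in Python (assuming the commercial is ~30 fps, since I didn't note the exact frame rate):

clip_seconds = 9
fps = 30                        # assumed source frame rate
frames_per_minute = 0.76        # encode rate I'm seeing
total_frames = clip_seconds * fps               # 270 frames
eta_minutes = total_frames / frames_per_minute  # ~355 minutes
print(f"{total_frames} frames -> ~{eta_minutes / 60:.1f} hours")  # ~5.9 hours

Which lines up with the 6 hours it's reporting.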

With hardware acceleration it won't be that much of a problem.

I get pretty good encoding times with VP9 right now. Why should I "upgrade" to AV1 when I can convert that same 9 seconds of video to VP9 in like twenty seconds?
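Timing it yourself is trivial, something like this (paths made up; assumes your ffmpeg was built with libvpx):

import subprocess, time

start = time.time()
subprocess.run([
    "ffmpeg", "-y", "-i", "commercial.y4m",  # hypothetical input
    "-c:v", "libvpx-vp9", "-b:v", "2M",
    "out.webm",
], check=True)
print(f"encoded in {time.time() - start:.1f} s")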

Fixed. Unless you can find me some non-botnet hardware encoder/decoder for VP9 or AV1, fuck you. I will stick with VP9's great software encoding and decoding.

It's known that AV1 (and HEVC while we're at it) has a stupidly long encoding time, but for AV1 that's because, to be open-source and royalty-free, it can't use any patented techniques and has to work around tons of stuff, which is what makes encoding slow.
In exchange: less storage space, better quality, less downsampling in your life.
And as said, hardware acceleration will end the problems.
Actually, are you aware that AV1 will be a "mother codec" and every subsequent codec from now on will be based on it? It means that no matter which branch/fork of AV1 people/companies choose to use, if the hardware is AV1-compatible, they'll all be accelerated.
Everything's a botnet.

More on why shit like AV1 is important in this video:
youtu.be/IheZzcYUV9w

Google fucked up immensely with WebP and WebM precisely because of the codecs, which make them light but prone to aging badly in the long run.
To companies this is inadmissible, since they lose money and bandwidth streaming the shit out of these everywhere. This also imperils old/rare archives.

OP here. Yes, I'm aware it's big, or will be big.
I've been holding off re-encoding my library until the bitstream freeze and the ffmpeg implementation sometime at the end of the year. Hopefully somebody will write an OpenCL or CUDA encoder soon, because it's painfully slow. I mean, holy fuck, the 9-second encode is still going and I'm down to 0.58 frames per minute now.

Is this a troll video? FLIF is a lossless codec. Who the fuck is going to re-encode lossy images 1000 times until they turn to goo?

*autism intensifies*
AV1, if what that post says is true about it being a universal hardware-based codec, could be the botnet of botnets.

For example, a virus project like Blue Beam may or may not use COBRA, a software suite for (((logging))) on all platforms. COBRA uses an encoding/transfer mechanism known as ZeroMQ, a TCP/encryption stack meant to work on every hardware platform and transfer/IPC method in existence.

ZeroMQ is an extension to ffmpeg. So if Google makes it so the only fast implementation of AV1 and all subsequent codecs is ffmpeg's implementation of AV*, then feasibly you could get a hardware backdoor for sound/video into every single future system with hardware AV* encoding, and have network access to it. Even if it was "open source" software all the way down to the black box of hardware decoding/encoding in the processor.

So the future of AV1 makes you paranoid, but the present where everything has h.264 hardware decode doesn't bother you? Stay consistent at least if you're going to be stupid.

Well yes, that is possible too. But it's much less likely, since there are ARM boards out there with h264 hardware decoding/encoding that you could build yourself as FOSS. But with AV1 I doubt jewgle of all people is ever going to let open hardware become a thing unless you pay many shekels. Too much for a hobbyist or a non-bribed/blackmailed corporation.

Uh uh NO, the reason, dear boy, it's called HARDWARE decode is because it's IN THE FUCKING HARDWARE. You don't get access to it on ARM FUCKING BOARDS because ARM FUCKING BOARDS aren't FREE OR OPEN SOURCE

No you faggot. There are design sheets for ARM boards on the internet right now that you could download and build yourself. Hence FOSS and hardware built yourself.
Stay mad. Not like anyone would believe such a thing could happen anyway with AV1.

Yeah, you and whose chip fab, dipshit?

The one you built yourself, faggot. You don't think there are how-tos on such things? Or hell, the fucking ability to figure it out amongst the goyim? Gas yourself, kike.

okay, you're clearly mentally deficient. Opinion discarded.

Really makes one think.

Go back to Holla Forums you dumb faggot. Holy fuck you are dumb.

Explain how I am dumb and how I can improve myself then.

Fuck this shit where's Daala?

FLIF can do both lossless and lossy.

AV1: A Status Update

1. lurk moar on doom9.
2. the bitstream isn't even frozen, any performance comparison is retarded beyond belief.
3. it's gonna be slow anyway.

This is based on Daala's work. Everybody is joining forces to work on this. Except Apple of course.

I know jack shit about working with video encoding, but if it's at all trivially parallelizable, you could just slap some ArrayFire on the relevant loops and make at least some of it run on graphics cards.
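Something like this is what I mean (assuming the ArrayFire Python bindings; the per-pixel op is made up, not anything from the actual encoder):

import arrayfire as af

# Stand-in for one 1080p plane; real code would upload actual pixel data.
frame = af.randu(1080, 1920)

# An elementwise op like this runs as a single GPU kernel instead of a
# Python-level double loop over rows and columns.
adjusted = frame * 0.8 + 0.1
print(af.sum(adjusted))

The catch is whether the expensive loops are actually elementwise.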

Off topic rant.
How is this considered beautiful by today's standards? Those soulless eyes, childlike nose, alien-shaped facial features. If she didn't wear eyeliner she would look really weird, like something is missing. I hate it, because it feels like I have been brainwashed into thinking eyelashes should be that prominent. She looks like she was created in a factory with other almost-identical clones that were each given a few distinct "features". What happened to people having some kind of defining quirks about them?

It's called "beauty industry" for a reason.

More for you to judge.

Straight from 1910, a real woman.

Am I correct? I was really hoping I was wrong about it being a fucking botnet of botnets. But this is being slid by those mad fagoli posts. Which is to say, Streisand effect.

It's shipping next year:
Quality is 20%-30% more efficient than VP9
Encoding complexity will remain substantially higher than VP9 or HEVC even after optimizations.

Can't be worse than VP9, which never got well-done multithreading.

At 0.5 frames per minute, I believe it.

That's not what real time means (unless the video is 0.5 frames per second, of course).

Spotted the grade schooler.

Better stock up on those Ryzen Threadrippers

Or start buying ARM based phones to run in parallel in the future.

Video encoding isn't embarrassingly parallel, you know.
x265.readthedocs.io/en/default/threading.html#frame-threading
Entropy coding takes most of the time, anyway.
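To illustrate with a toy adaptive coder (nothing like AV1's actual coder, just the shape of the problem): every symbol is coded against a model that was updated by all the symbols before it, so you can't hand the second half of the stream to another core without coding the first half first.

import math

def encode_cost(bits):
    # Laplace-smoothed adaptive binary model: P(1) = ones/total so far.
    ones, total, cost = 1, 2, 0.0
    for b in bits:
        p = ones / total if b else 1 - ones / total
        cost += -math.log2(p)  # ideal code length for this symbol
        ones += b              # the model update IS the serial dependency
        total += 1
    return cost

print(encode_cost([1, 1, 0, 1, 0, 0, 1]))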

The board is the least of the issues. The chip, user, the damn chip. That square block in the middle of your NormiePi. That's not open source and won't be, because ARM is not open source.

I don't see why it couldn't be.
Split the video into several minute- or second-long parts, encode each of them individually, and stitch them together at the end.
The only two downsides would be that, for one, you couldn't reference any frames from before the start of each segment, and on the other hand you'd need a bit more disk space for the stitching (the sum of all parts plus the size of the current part).
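A sketch of that flow (segment count, filenames, and bitrate all made up; assumes an ffmpeg build with libaom):

import subprocess
from concurrent.futures import ThreadPoolExecutor

SEGS = 8  # pretend the clip was pre-split into seg000.y4m ... seg007.y4m

def encode(i):
    # Each segment is an independent encode; the real work happens in the
    # child ffmpeg process, so threads are enough to keep all cores busy.
    subprocess.run([
        "ffmpeg", "-y", "-i", f"seg{i:03d}.y4m",
        "-c:v", "libaom-av1", "-b:v", "1M",
        f"seg{i:03d}.webm",
    ], check=True)

with ThreadPoolExecutor(max_workers=SEGS) as pool:
    list(pool.map(encode, range(SEGS)))

# Stitch: the concat demuxer takes a text file listing the parts, and
# -c copy joins them without re-encoding anything.
with open("parts.txt", "w") as f:
    f.writelines(f"file 'seg{i:03d}.webm'\n" for i in range(SEGS))
subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", "parts.txt", "-c", "copy", "out.webm"], check=True)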

Encoders should definitely have this option, because unless you're streaming the encoded video it would noticeably improve performance.

I'm the OP. I was getting 0.5 frames per minute with a 4-core Haswell. I'm saying I believe them when they say you need 200 cores to get 24-30 frames encoded per second. Encode time is fucking unbelievably slow.
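Spelling it out (the linear-scaling assumption is mine):

frames_per_min = 0.5
fps_now = frames_per_min / 60       # ~0.0083 fps on my 4 cores
shortfall = 24 / fps_now            # how far short of real time
print(shortfall)                    # 2880.0x
print(shortfall / (200 / 4))        # ~58x would still have to come from software

So 200 cores alone doesn't get you there; the claim only works combined with massive software speedups.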

This is done anyway (each keyframe is a new boundary which can't be crossed by later reference frames, otherwise seeking wouldn't work at reasonable speed).
And good keyframe detection can be done easily, so it won't be a big efficiency loss.
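You can even pin the keyframe interval at encode time so the segment cuts land exactly where you want (the value 240 is made up for illustration):

import subprocess

# -g sets the GOP size, i.e. the maximum distance between keyframes.
subprocess.run([
    "ffmpeg", "-y", "-i", "input.y4m",
    "-c:v", "libvpx-vp9", "-g", "240",
    "out.webm",
], check=True)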

Then your bottleneck is memory bandwidth. And entropy coding is still taking too much time.

I see. There won't be any optimization before the bitstream freeze, anyway. NONE.
Botnet.

dead on arrival

In order to have real-time 24 fps encode on this hardware, they would need a 120x speed increase in 3 months. They've already stated hardware acceleration won't be around until the end of next year. If you've ever played around with VP9, you'd know encode times were bad. The only bright side of all this is that decoding isn't nearly as resource-intensive. Otherwise AV1 would be DOA.

Any arguments to back that up?

Jewgle being greedy is one.

VP9 encode times aren't bad, actually. Compare it to x265 and the only bad thing is the threading, not the actual computational cost.

Common sense and doom9 lurking. Why do optimizations if you'll potentially have to strip them out? Plus, they're simply focused on getting compression gains at this point.

Compare it to something people actually use instead of a shekel-grabbing meme codec.

???