I know storage is fairly cheap, but there are e.g. millions of new videos uploaded to YouTube every day, each probably a few hundred MBs to a few GBs. It all has to take an enormous amount of space. Not to mention backups.
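For a rough sense of scale (back-of-envelope only; the upload rate and average size below are guesses consistent with the figures above, not official numbers):

    # Hypothetical back-of-envelope: ~3M uploads/day at ~500 MB each
    uploads_per_day = 3_000_000
    avg_size_mb = 500
    daily_pb = uploads_per_day * avg_size_mb / 1e9  # 1 PB = 1e9 MB
    print(daily_pb)  # ~1.5 PB of raw uploads per day

So even before extra renditions, replication, or backups, raw uploads alone would be on the order of petabytes per day.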
Does YouTube actually store copies of each one? Or does it store one master copy and downsample as required in real time? It probably stores them, since storage is cheaper than CPU time.
I believe they store them, and that's why it processes the lowest res first and works up.
It's transcoded on the fly; this is a fairly simple Lambda function in AWS, or whatever the GCP equivalent is. You can't upsample potato-spec video; the reason it looks like shit is bandwidth and the service detecting a lower speed than is actually available.
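For what it's worth, a minimal sketch of what an on-the-fly transcode step could look like, assuming ffmpeg is on the host; the rendition settings and file names are made up for illustration, and a real service would stream HLS/DASH segments rather than whole files:

    import subprocess

    # Hypothetical renditions; scales and bitrates are illustrative, not YouTube's.
    RENDITIONS = {
        "480p": ["-vf", "scale=-2:480", "-b:v", "1M"],
        "720p": ["-vf", "scale=-2:720", "-b:v", "2.5M"],
    }

    def transcode(master_path: str, out_path: str, rendition: str) -> None:
        # Shell out to ffmpeg to produce the requested rendition from the master.
        subprocess.run(
            ["ffmpeg", "-y", "-i", master_path,
             *RENDITIONS[rendition],
             "-c:v", "libx264", "-c:a", "aac", out_path],
            check=True,
        )

    transcode("master.mp4", "out_480p.mp4", "480p")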
Are you suggesting they don't store different versions? This (speculative, I know) suggests they do.
That response is almost 10 years old and completely outdated. I've designed and maintained a national media service and can confirm that on-the-fly transcoding is both cheaper and easier. It does make sense to store different formats of videos that are popular at the minute, but in the medium to long term, streams are transcoded on the fly.
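A sketch of that hybrid idea (the threshold and interfaces here are hypothetical, just to make the policy concrete): keep pre-transcoded renditions for videos that are hot right now, transcode per request for the long tail.

    from dataclasses import dataclass, field

    HOT_VIEWS_PER_DAY = 10_000  # illustrative cutoff, not a real figure

    @dataclass
    class Video:
        id: str
        master: str          # path/URI of the master copy
        views_per_day: int

    @dataclass
    class RenditionStore:
        cache: dict = field(default_factory=dict)

        def get(self, video_id, rendition):
            return self.cache.get((video_id, rendition))

        def put(self, video_id, rendition, blob):
            self.cache[(video_id, rendition)] = blob

    def fake_transcode(master, rendition):
        # Stand-in for a real transcode (e.g. the ffmpeg call sketched above).
        return f"{master} @ {rendition}"

    def serve(video, rendition, store):
        if video.views_per_day >= HOT_VIEWS_PER_DAY:
            # Hot: serve a stored rendition, lazily populating it once.
            blob = store.get(video.id, rendition)
            if blob is None:
                blob = fake_transcode(video.master, rendition)
                store.put(video.id, rendition, blob)
            return blob
        # Long tail: transcode on demand and keep nothing.
        return fake_transcode(video.master, rendition)

    store = RenditionStore()
    print(serve(Video("abc", "master.mp4", 50_000), "720p", store))

The economics flip with popularity: storage for a hot video amortizes over millions of views, while CPU for a cold one is paid once per rare request.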
Sure it's old, but the stats I posted in a lower comment show that at YouTube's scale, it makes sense to store.