PikaChokeMe
New Member
I mean this as more of a technical question...
not just a... once upon a time in a YouTube video some guy said always use my desktop resolution, so that's the final word on it.
I have a 3440x1440 ultrawide monitor, but I basically never stream at the 3440x1440 native resolution. I usually stream at a 1720x720 video output resolution because that's 50% of my regular resolution and some form of a "720p", I guess. I do play games at a 3440x1440 resolution, but I also have a vtuber application running.
Currently I'm running my games at 3440x1440 desktop resolution and capture that window.
Then I run a vtuber application at 1600x900 and capture that window as well.
I then also have some graphics and overlays which I have laid out on my canvas, and my guess is somehow this all goes through some nebulous process of being smashed together, and then gets downscaled using a bicubic filter.
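To put some rough numbers on that (just back-of-the-envelope pixel counts based on the resolutions above, not a claim about how OBS actually schedules its rendering or what the bicubic pass really costs), this is roughly what gets composited and then downscaled every frame in my current setup:

```python
# Rough per-frame pixel budget for the CURRENT setup.
# Pixel counts only; real GPU cost also depends on texture bandwidth,
# filter taps, the encoder, etc.

game   = 3440 * 1440   # game captured at native resolution
vtuber = 1600 * 900    # vtuber app capture
canvas = 3440 * 1440   # base canvas everything is composited onto
output = 1720 * 720    # stream output after the bicubic downscale

print(f"game capture     : {game:,} px")
print(f"vtuber capture   : {vtuber:,} px")
print(f"canvas composite : {canvas:,} px")
print(f"final downscale  : {canvas:,} px -> {output:,} px")
```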
I'm thinking alternatively what I can do is:
Set base canvas resolution to 1720x720.
Capture and resize one game window by 50% in the scene editor (I'm not sure how this actually gets rescaled or how expensive this is comparatively)
Capture and run one vtuber application at 960x540 and output this at its "base resolution".
Capture a bunch of smaller graphics and overlays at their "base resolution".
Save computation costs on downscaling or upscaling filters since the base canvas and output resolution are the same?
I'm thinking in the case where I'm using 720p as my base and output resolution, not only are a lot of my assets smaller, but I'm also not rescaling as much stuff as a whole, and in some cases I'm not making bigger graphics or capturing other windows at a higher resolution just to scale them back down to a smaller resolution. To my human brain that knows almost nothing about how the inner workings of OBS rescales stuff, this seems like it should be more efficient and technically the less computationally expensive option, but everywhere I've looked everything just says, "Set your base canvas to your desktop resolution."
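Here's the same back-of-the-envelope arithmetic for the proposed 1720x720 canvas (again just pixel counts, and I'm assuming the per-scene-item scaling uses a filter of comparable cost to the final bicubic downscale, which I don't actually know):

```python
# Rough per-frame pixel budget for the PROPOSED 1720x720 canvas setup.

game_src    = 3440 * 1440   # game is still rendered/captured at native res
game_scaled = 1720 * 720    # then scaled to 50% as a scene item
vtuber      = 960 * 540     # vtuber app run at a smaller resolution
canvas      = 1720 * 720    # base canvas == output resolution

print(f"game item scale  : {game_src:,} px -> {game_scaled:,} px")
print(f"vtuber capture   : {vtuber:,} px")
print(f"canvas composite : {canvas:,} px")
print("final rescale    : none (canvas already matches output)")
```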
Can someone give me more insight into the technical details of how this works, or whether my logic makes sense? Is my method or idea more efficient? Or is it actually still better to just use my desktop as my base canvas size and downscale?