This is NOT my area of expertise, but there are multiple levels of decoding and encoding involved
And beware: there is separate Video RAM (VRAM) on the GPU (graphics/video card), distinct from system RAM
From camera to computer
- for example, it's common for USB-connected devices to compress video (and the decompressing is often done on the CPU, I believe)... though it probably depends.
- there are multiple other connection technologies (HDMI, SDI, NDI, RTSP, etc.) and video codecs (H.264, H.265, etc.) which all come into play
and part of where video decoding takes place depends on the specific driver involved on the receiving end (e.g. a USB-connected HDMI capture 'card')
Your other thread mentions connecting via different technologies (USB-connected device, native webcam, and capture device), so getting the video onto the computer, and available to OBS Studio, may not follow the same processing path (CPU vs GPU)
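To put rough numbers on why USB devices tend to compress before sending: a quick back-of-envelope sketch (my own arithmetic, nothing OBS-specific; the 16 bits/pixel figure assumes an uncompressed YUY2-style format, and real USB 2.0 payload throughput is well below its 480 Mbit/s signaling rate):

```python
# Rough arithmetic: uncompressed video bitrates vs the USB 2.0 signaling
# rate. Raw 1080p30 already exceeds what USB 2.0 can carry, which is why
# USB webcams/capture devices typically send MJPEG or H.264 instead.

def raw_bitrate_mbps(width, height, fps, bits_per_pixel=16):
    """Uncompressed video bitrate in Mbit/s (16 bpp ~ YUY2)."""
    return width * height * bits_per_pixel * fps / 1_000_000

usb2_mbps = 480  # USB 2.0 signaling rate; usable payload is lower still

for name, (w, h, fps) in {
    "720p30":  (1280, 720, 30),
    "1080p30": (1920, 1080, 30),
    "1080p60": (1920, 1080, 60),
}.items():
    mbps = raw_bitrate_mbps(w, h, fps)
    verdict = "fits (barely)" if mbps < usb2_mbps else "too big -> must compress"
    print(f"{name}: {mbps:,.0f} Mbit/s raw ({verdict})")
```

Raw 1080p30 works out to roughly 995 Mbit/s, about double the USB 2.0 signaling rate, so compression on the device side is pretty much forced.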
Now, once OBS Studio has it as a Source, there is rendering/compositing, and then Encoding for output to a Recording (or Stream)
and this is where I look to others for a better explanation
Ideally, those incoming video streams would be handed off to GPU decode offload, and any Recording/Streaming sent to GPU encode offload. Here, NVIDIA tends to work much better than AMD (due to AMD consciously choosing to under-invest in H.264 encoding, which is what most streaming platforms use; for Recording, not streaming, using H.265 or AV1, the analysis is a bit different). Newer encoding standards take a LOT more computational power to achieve their higher compression ratios.
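To illustrate the compression-ratio point with rough numbers (the per-codec bitrate targets below are my own ballpark assumptions for "similar quality" 1080p60 streaming, not official figures from anywhere):

```python
# Back-of-envelope compression ratios. The encoded bitrate targets are
# rough assumptions typical of streaming guides; newer codecs hit lower
# bitrates for similar quality, at a higher computational cost to encode.

def raw_mbps(width, height, fps, bits_per_pixel=12):
    """Uncompressed bitrate in Mbit/s (12 bpp = 8-bit 4:2:0)."""
    return width * height * bits_per_pixel * fps / 1_000_000

raw_1080p60 = raw_mbps(1920, 1080, 60)  # ~1,493 Mbit/s uncompressed

# Hypothetical "similar quality" 1080p60 targets (my assumptions):
targets_mbps = {"H.264": 6.0, "H.265/HEVC": 4.0, "AV1": 3.0}

for codec, mbps in targets_mbps.items():
    ratio = raw_1080p60 / mbps
    print(f"{codec}: ~{ratio:,.0f}:1 compression at {mbps} Mbit/s")
```

The point is the scale: squeezing ~1,500 Mbit/s of raw video into a few Mbit/s is a few-hundred-to-one reduction, and each newer codec buys a lower bitrate by spending more compute per frame.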
As for Editing, it depends on which software you use. The 2 largest/best known would be Adobe Premiere Pro and DaVinci Resolve. Resolve is known to lean on the GPU more... so it depends. And higher-resolution video takes more RAM (both motherboard/CPU-connected RAM and VRAM); 4K video takes a lot more than 1080p... how much more... depends
A more powerful GPU makes video editing smoother, and rendering output faster... as a gross generality. BUT... how much you need depends on your patience level, how you value your time vs cash, and the editing software. For 1080p30 content, most modern systems will be OK for non-professional/simple editing. A key consideration is the ability of the CPU, RAM, disk I/O, and GPU to keep up and enable real-time editing (i.e., no visible waiting for the end-user in the editing software's user interface)... which is distinct from rendering the final output. Move up to 4K, and hardware resource demands go up significantly. Recently, 8K video has become possible with the latest mirrorless cameras, and that TAKES a huge amount of modern processing (it can 'cripple' a professional dedicated US$5-10K editing rig, depending on exact usage, settings, details, etc.)
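Some rough arithmetic on how the demands scale with resolution (assuming 8-bit 4:2:0, i.e. 12 bits/pixel; real editing codecs like 10-bit or ProRes are often heavier, so treat these as lower bounds):

```python
# Why 8K "takes a huge amount" of processing: pixel count (and therefore
# raw data rate, memory footprint, and decode/encode work) roughly
# quadruples at each resolution step up.

RESOLUTIONS = {
    "1080p":  (1920, 1080),
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

base_pixels = 1920 * 1080
for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    frame_mb = pixels * 12 / 8 / 1_000_000  # one raw 4:2:0 frame, in MB
    rate_mbs = frame_mb * 30                # data rate at 30 fps, MB/s
    print(f"{name}: {pixels / base_pixels:.0f}x the pixels of 1080p, "
          f"~{frame_mb:.1f} MB/frame, ~{rate_mbs:.0f} MB/s raw at 30 fps")
```

So 4K is ~4x the pixels of 1080p and 8K is ~16x, which is why a rig that breezes through 1080p can choke on 8K.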