Resize a source window from the middle of the viewport

dgebel

New Member
1. Easier way to resize a Source window that is bigger than the viewport.

We frequently need to zoom/resize/crop the camera input to a size LARGER than the viewport - in other words, using digital zoom. The preview source box only lets you do a manual resize from the edge/corner, as far as I can tell.

*IS* there some way to resize without moving the source window to see the edge's resize widgets? Surely we're not the only ones that need to do this!

i.e. To make the source window a bit bigger or smaller when it's larger than the viewport, we have to drag the video source box out to the side until an edge or corner is visible, resize it there, then drag it back to the desired view and check whether it's the right amount of zoom. Usually we have to rinse & repeat. ARRGG!

I'm imagining a resize widget in the centre of the visible part of the source box, so I could watch the resizing happen while looking at the source or video itself. Bonus (for some people) would be a rotate widget. And/or a zoom dial or number box so we could specify exactly what size we want. Transform doesn't seem to have a numeric scale setting?

2. Why we're using an annoying, poor-quality, digital zoom... and looking for suggestions!

We have ONE 4K camera (a Sony Handycam) with an HD capture card (OK, and an HD webcam). We don't have a dedicated camera operator. So we set the camera's optical zoom to fit the whole stage for the first half of the program, and then resize the video window to at least 2x bigger than the viewport - effectively a digital zoom - so viewers can "clearly" see people talking on different parts of the stage.

Then, halfway through, we use the optical zoom to get close in on the main speaker, who doesn't usually move around a lot, thankfully, and stream the rest of the day at full optical zoom.

Is this going to be the best quality we can get? We're streaming at 720p and the video is coming in at 1080p; if the source window is 2x the size of the viewport, is there any scaling going on?
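Working the numbers on that setup (a hypothetical back-of-the-envelope sketch; the resolutions are assumed from the post, nothing OBS-specific):

```python
# Assumed setup: 1080p capture, 720p canvas, source drawn at 2x the viewport.
src_w, src_h = 1920, 1080     # capture resolution
out_w, out_h = 1280, 720      # stream/canvas resolution
zoom = 2.0                    # source box is 2x the viewport size

scale = out_w * zoom / src_w  # upscale factor applied to each source pixel
visible_w = out_w / scale     # source pixels actually visible across the canvas
visible_h = out_h / scale

print(round(scale, 3))                      # 1.333 -> yes, it's upscaling
print(round(visible_w), round(visible_h))   # 960 540: the region you actually see
```

So a 2x digital zoom on a 1080p feed ends up showing a 960x540 crop of the source, upscaled to fill 1280x720 - about a 1.33x upscale, which is part of why it looks soft.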

To hide some of the downgraded quality (and to make sure the computer isn't overtaxed, although I think we'd be okay with our upgraded machine), we stream at 720p instead of the camera's captured 1080p.

At some point, we're more likely to get a 4K capture card than another camera such as a PTZ (cost), which would let us use a lower level of digital zoom, if any. Basically, we'd be downscaling the video input from 4K to 720p, effectively giving us a full 4X zoom without upscaling. I hope. Is that crazy?
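It's worth sanity-checking that arithmetic (a hypothetical sketch; resolutions assumed, not tied to any particular capture card):

```python
# Max digital zoom that never upscales: a crop of the capture shown 1:1
# (or downscaled) on the output canvas. Resolutions below are assumed.
def max_lossless_zoom(cap_w, cap_h, out_w, out_h):
    # limited by whichever axis runs out of source pixels first
    return min(cap_w / out_w, cap_h / out_h)

print(max_lossless_zoom(1920, 1080, 1280, 720))  # 1.5  (1080p -> 720p)
print(max_lossless_zoom(3840, 2160, 1280, 720))  # 3.0  (4K    -> 720p)
```

By this count the linear headroom from 4K down to a 720p stream works out to 3x rather than 4x - the familiar "4x" figure is the pixel-count (area) ratio of 4K vs 1080p. Still a big step up from the current 2x-with-upscaling.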

Capturing and streaming at native resolution would be better, I know, but it's what we have to work with for now. Any suggestions for better quality appreciated!
 

AaronD

Active Member
It would probably help immensely if you could somehow motorize the optical zoom, which would probably also require a motorized tripod. And at that point, you'd effectively have a PTZ camera, even if the gear itself wasn't originally designed to be one.

How good are you or your team at DIY'ing something like that? It wouldn't necessarily have to be software-controlled...

(I remember as a kid, connecting two Lego motors with a long wire, putting a crank on one, and watching the other one run with considerable force. If you actually do use Lego for this, you'll have to use the older, "dumb" motors, not the newer ones that have smarts built in. The "4-stud cube" kind might be about right, with metal-cornered studs where the Lego wire snaps onto them.)

---

For the scaling question, yes, it's practically guaranteed to scale. Only if you *exactly* match the pixel size and align the two grids will it not.

But it probably doesn't matter either, because the encoder wrecks most of what a "pixel" means anyway. It's a lot more effective to encode frequency information, across the frame and across time, and the pixels at the other end are reconstructed from whatever frequency information doesn't get thrown away.
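A toy illustration of that idea (a plain-Python DCT on one row of made-up pixel values - not what any real encoder literally does, but the same principle):

```python
import math

# Transform a row of pixel values into frequencies (DCT-II), throw away
# the high-frequency terms, and reconstruct. Individual pixel values
# shift a little, but the overall pattern survives.
N = 8
pixels = [52, 55, 61, 66, 70, 61, 64, 73]   # made-up sample row

def dct(x):
    # unnormalized DCT-II
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(X):
    # matching inverse (DCT-III scaled by 2/N)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

coeffs = dct(pixels)
lowpassed = coeffs[:3] + [0.0] * (N - 3)    # keep only 3 of the 8 frequencies
approx = idct(lowpassed)
print([round(v) for v in approx])           # close to the original row
```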

That actually lines up fairly well with how we see anyway. We don't actually see pixels, unless we're looking for them. We see patterns, which are changes across time and space, and that's precisely what the frequency domain describes.

This audio demonstration might be enlightening, if you're unsure about spatial resolution less than one pixel, or time resolution less than one frame:
The same rules apply to any digitally-sampled thing, including video, and to any dimension of sampling, not just time.
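The same point in code (a pure-Python sketch with made-up numbers): a tone delayed by a fraction of one sample, with the delay recovered from phase - i.e. resolution well below one sample, or one frame in the video case:

```python
import math, cmath

fs = 48000           # sample rate, Hz (assumed)
f = 1000.0           # tone frequency, Hz (an exact number of cycles below)
N = 480              # samples analysed (10 full cycles of the tone)
true_delay = 0.3     # delay in FRACTIONAL samples

a = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]
b = [math.sin(2 * math.pi * f * (n - true_delay) / fs) for n in range(N)]

def phase_at(x, f, fs):
    # single-bin DFT: complex amplitude of the tone at frequency f
    acc = sum(x[n] * cmath.exp(-2j * math.pi * f * n / fs) for n in range(len(x)))
    return cmath.phase(acc)

# phase difference between the two captures, converted back to samples
dphi = phase_at(a, f, fs) - phase_at(b, f, fs)
est = dphi * fs / (2 * math.pi * f)
print(round(est, 3))   # 0.3 -- recovered to well under one sample
```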
 