If you mean Spotlight (which it looks like based on that image), then AFAIK it is designed to punch straight through your screen and transfer the color information from Spotlight’s visible pixels (and only those pixels) onto the model. In other words, Spotlight is not sampling the image’s raw data directly; it is looking at what you can actually see of that image on your monitor’s pixels (or perhaps the document’s pixols) and projecting that information.
Example: if you have a massive 8K image loaded into Spotlight but sized so the entire thing fits on the screen, then you’re not going to be projecting 8K worth of pixel detail, because most of the image’s pixels are not actually visible on your screen. Spotlight downsizes the image to fit your display, and what you see is what you’ll get from the projection.
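To put rough numbers on that (plain Python back-of-the-envelope math, not anything ZBrush exposes, and the screen size is just a hypothetical):

```python
# Fitting an 8K-wide image onto a 1920 px wide viewport.
image_w, image_h = 7680, 4320    # 8K UHD source
screen_w, screen_h = 1920, 1080  # hypothetical available viewport

scale = screen_w / image_w   # 0.25
block = image_w // screen_w  # 4

print(f"Displayed at {scale:.0%} of source size")
print(f"Each screen pixel covers a {block}x{block} block of source pixels")
print(f"Only 1/{block * block} of the source pixel data is visible to project")
```

So in that scenario roughly 15 out of every 16 source pixels never make it to the screen, and therefore never make it onto the model.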
In a sense, model scale works the same way. Even if the model has enough vertex density to support all the required detail, if it is scaled such that one pixel of the Spotlight image covers half the points on the model, then all of those points will wind up with the same color information projected onto them.
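Here’s a toy sketch of that effect (hypothetical numbers, nothing to do with ZBrush’s internals): several vertices that project into the same screen pixel all receive that single pixel’s color.

```python
# A 2x2 "screen" of RGB pixels; each pixel spans 0.5 units in this toy setup.
screen = [[(200, 40, 40), (40, 200, 40)],
          [(40, 40, 200), (200, 200, 40)]]
pixel_size = 0.5

# Four vertices whose projected positions all fall inside pixel (0, 0):
verts = [(0.10, 0.20), (0.30, 0.10), (0.40, 0.40), (0.20, 0.45)]

for vx, vy in verts:
    px, py = int(vx // pixel_size), int(vy // pixel_size)
    print(f"vertex at ({vx:.2f}, {vy:.2f}) gets color {screen[py][px]}")
# All four vertices print (200, 40, 40): one source pixel,
# identical color on every point it covers.
```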
So ideally you would want to aim for a 1:1 ratio between the two for a pixel-perfect projection. There’s no shortcut where you can zoom out and paint an entire high-res character in one stroke. If you’re doing a full character, you’ll probably want to start from a distance to lay down a blurred overall sense of the coloring, then move in closer with each pass to get the finer, sharper details (much like sculpting and painting in general, where you establish the large forms before focusing on the smaller details).
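If it helps, the coarse-to-fine idea can be sketched in plain Python too (a toy illustration under made-up sizes, not how Spotlight is implemented): a far pass only has an averaged version of the image available, while a zoomed-in pass gets the full-resolution detail for the region that fills the screen.

```python
def downsample(img, factor):
    """Average-pool a 2D grayscale image by an integer factor."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor**2
             for x in range(w // factor)]
            for y in range(h // factor)]

source = [[(x * y) % 256 for x in range(8)] for y in range(8)]

# Pass 1 (zoomed out): the whole image fits on a tiny "screen",
# so only a blurry 2x2 averaged version is available to project.
coarse = downsample(source, 4)
print("coarse pass:", coarse)

# Pass 2 (zoomed in): a 4x4 region now fills the screen at 1:1,
# so its full detail is available to project.
detail = [row[0:4] for row in source[0:4]]
print("detail pass:", detail)
```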