Fractional scaling is ill-conceived (or I'm failing to grasp it)
What I expected: the typical oversampling trick: you force your toolkit to render everything at a higher integer scale (typically 2x) and then use xrandr to scale down (it uses bicubic interpolation) by a potentially fractional factor. So if you want 1.75: first set GTK and Qt to render at 2x, then apply a scale of 2/1.75 in xrandr.
What I get: I want 1.75, I set that in Display settings, and it does nothing to increase the underlying toolkit scale factor; it simply sets the xrandr scale to 1.75. This is not what I want, not by a long shot: 1. everything gets smaller, not larger; 2. going the other way around (scale < 1) degrades rendering quality (the entire point of upscaling first and downscaling afterwards is to get decent output by producing more information before interpolating, so the resulting blurriness is hopefully acceptable).
So how does this new setting help me? I first need to manually force GTK and Qt to render at 2x by tweaking the environment, then I have to compute the right downscaling factor for xrandr, which is not 1.75 but 2 / 1.75, and then I have to enter that in the new Display settings dialog. Not very helpful, and extremely misleading.
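To make the workaround concrete, here is a minimal sketch of the manual steps described above. The output name `eDP-1` and the 2x/1.75 numbers are assumptions for illustration; `GDK_SCALE` and `QT_SCALE_FACTOR` are the usual GTK/Qt scaling variables, and the xrandr call is commented out since it needs a running X session:

```shell
#!/bin/sh
desired=1.75   # the effective scale the user wants
toolkit=2      # integer factor the toolkits are forced to render at

# The xrandr factor is toolkit scale divided by desired scale,
# i.e. 2 / 1.75, not 1.75 itself.
scale=$(awk -v t="$toolkit" -v d="$desired" 'BEGIN { printf "%.4f", t / d }')
echo "$scale"

# Force the toolkits to oversample at an integer factor.
export GDK_SCALE="$toolkit"
export QT_SCALE_FACTOR="$toolkit"

# Then downscale with xrandr (assumed output name; uncomment on a live session):
# xrandr --output eDP-1 --scale "${scale}x${scale}"
```

Only the computed factor (about 1.143 here) is what belongs in the scaling field, which is exactly the non-obvious part being complained about.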
I don't want to sound harsh at all; I just think you got this wrong, or there is another setting I'm missing. But oversampling is the standard way to go: it has been the xrandr trick forever, it's how Ubuntu implements fractional scaling, it's how GNOME/Wayland implements it, and even macOS does it. The point is that the underlying toolkits tend to have a pixel bias that makes it very hard to produce fractional output from them directly (Qt is an exception), so you first need to go up by an integer factor, then down using a good interpolation algorithm (bicubic arguably isn't the best for this use case, but it's the one xrandr implements).