Flip the problem on its head. Where does it say in the docs that full table
scans must be reduced?
Obviously, full table scans are powerful and are the best way of getting
lots of data quickly if your filters are not very selective (return many
rows).
Using an index is plain stupid if the filter is not selective: it is
crazy to want to visit many blocks one by one, using a fresh buffer in the
cache as you go.
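The arithmetic behind that claim can be sketched in a few lines. This is a back-of-envelope model only, with made-up numbers: the multiblock read size, index height, and row counts below are illustrative assumptions, not figures from any real database.

```python
# Rough I/O cost comparison: full table scan vs. index access.
# All numbers here are illustrative assumptions, not measurements.

def full_scan_cost(table_blocks, multiblock_read=8):
    # A full scan reads every block, but in large multiblock reads,
    # so the number of read calls is table_blocks / multiblock_read.
    return table_blocks / multiblock_read

def index_access_cost(rows_returned, index_height=2):
    # Index access pays a few index-block reads, plus (worst case)
    # one single-block table read per row returned.
    return index_height + rows_returned

table_blocks = 10_000      # assumed table size in blocks
rows = 200_000             # a non-selective filter returning many rows

print(full_scan_cost(table_blocks))    # 1250.0 read calls
print(index_access_cost(rows))         # 200002 single-block reads
```

With a filter that returns many rows, the index path does orders of magnitude more single-block reads than the scan does multiblock reads, which is the whole point.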
If your dinosaur doesn’t agree that something that is faster and uses fewer
latches is better, then maybe they’re really an ostrich?
Or maybe they’re working to achieve a different set of metrics?