I'm not smart enough to verify the accuracy of this claim, nor to say exactly what the implications are, but it seems like fixing it might improve performance.

  • AernaLingus [any] · 1 year ago

    There's a variable holding the number of online CPUs (called cpus) which is hardcoded to max out at 8, but that doesn't mean cores beyond the eighth go unused--it just means the scheduling scaling factor stops changing, in both the linear and logarithmic cases, once you go above that number (see the small standalone sketch after the snippet):

    /*
     * Increase the granularity value when there are more CPUs,
     * because with more CPUs the 'effective latency' as visible
     * to users decreases. But the relationship is not linear,
     * so pick a second-best guess by going with the log2 of the
     * number of CPUs.
     *
     * This idea comes from the SD scheduler of Con Kolivas:
     */
    static unsigned int get_update_sysctl_factor(void)
    {
    	unsigned int cpus = min_t(unsigned int, num_online_cpus(), 8);
    	unsigned int factor;
    
    	switch (sysctl_sched_tunable_scaling) {
    	case SCHED_TUNABLESCALING_NONE:
    		factor = 1;
    		break;
    	case SCHED_TUNABLESCALING_LINEAR:
    		factor = cpus;
    		break;
    	case SCHED_TUNABLESCALING_LOG:
    	default:
    		factor = 1 + ilog2(cpus);
    		break;
    	}
    
    	return factor;
    }
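
    To make the saturation concrete, here's a tiny userspace sketch that just re-implements the arithmetic above. It is not kernel code--ilog2_u and factor_log are made-up names for illustration--but it shows what the cap does in the default logarithmic case:

    #include <stdio.h>

    /* Userspace stand-in for the kernel's ilog2(): floor of log base 2. */
    static unsigned int ilog2_u(unsigned int x)
    {
    	unsigned int r = 0;

    	while (x >>= 1)
    		r++;
    	return r;
    }

    /* Mirrors the SCHED_TUNABLESCALING_LOG branch above, including the cap at 8. */
    static unsigned int factor_log(unsigned int online_cpus)
    {
    	unsigned int cpus = online_cpus < 8 ? online_cpus : 8;

    	return 1 + ilog2_u(cpus);
    }

    int main(void)
    {
    	unsigned int counts[] = { 1, 2, 4, 8, 16, 64, 192 };

    	for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++)
    		printf("%3u online CPUs -> factor %u\n", counts[i], factor_log(counts[i]));

    	return 0;
    }

    Everything from 8 cores upward prints factor 4: the factor stops growing, which only affects how the scheduler's tunables are scaled, not whether the extra cores get used.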
    

    The core claim is this:

    It’s problematic that the kernel was hardcoded to a maximum of 8 cores (scaling factor of 4). It can’t be good to reschedule hundreds of tasks every few milliseconds, maybe on a different core, maybe on a different die. It can’t be good for performance and cache locality.

    On this point, I have no idea (I hope someone more knowledgeable will weigh in). But I'd say the headline is misleading at best.