[irqbalance] Re: [PATCH 1/5] irqbalance: function dump_topo_obj renamed to dump_cpu_obj

  • From: Petr Holasek <pholasek@xxxxxxxxxx>
  • To: Neil Horman <nhorman@xxxxxxxxxxxxx>
  • Date: Thu, 19 Mar 2015 11:49:53 +0100

On Wed, 18 Mar 2015, Neil Horman <nhorman@xxxxxxxxxxxxx> wrote:
> On Wed, Mar 18, 2015 at 03:57:54PM +0100, Petr Holasek wrote:
> > Fixed the confusingly generic function name.
> > 
> But a topo_obj isn't always a cpu.  It can be a cpu, a cache domain, a package
> (a group of cpus), or a numa node (a group of packages).  I'm not opposed to
> better naming here, but I don't think cpu is the right one.  Perhaps
> dump_balance_obj?
> 
> Neil
> 

Agreed, dump_balance_obj is better.
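
For reference, a minimal sketch of the tree levels a struct topo_obj can
stand for, per your description. The enum below is illustrative only, an
assumption made for this thread rather than a copy of irqbalance's headers:

/* Illustrative sketch: the four levels of the balancing tree that a
 * struct topo_obj can represent, which is why a generic name such as
 * dump_balance_obj fits better than dump_cpu_obj. */
enum balance_level {
	BALANCE_CPU,           /* a single cpu */
	BALANCE_CACHE_DOMAIN,  /* a cache domain (a group of cpus) */
	BALANCE_PACKAGE,       /* a package (a group of cpus) */
	BALANCE_NODE           /* a numa node (a group of packages) */
};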

> 
> 
> > Signed-off-by: Petr Holasek <pholasek@xxxxxxxxxx>
> > ---
> >  cputree.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/cputree.c b/cputree.c
> > index cfa70b6..76b3e2d 100644
> > --- a/cputree.c
> > +++ b/cputree.c
> > @@ -346,7 +346,7 @@ static void dump_irq(struct irq_info *info, void *data)
> >         info->irq, irq_numa_node(info)->number, classes[info->class], (unsigned int)info->load);
> >  }
> >  
> > -static void dump_topo_obj(struct topo_obj *d, void *data __attribute__((unused)))
> > +static void dump_cpu_obj(struct topo_obj *d, void *data __attribute__((unused)))
> >  {
> >     struct topo_obj *c = (struct topo_obj *)d;
> >     log(TO_CONSOLE, LOG_INFO, "%s%s%s%sCPU number %i  numa_node is %d (load %lu)\n",
> > @@ -364,7 +364,7 @@ static void dump_cache_domain(struct topo_obj *d, void *data)
> >         log_indent, log_indent,
> >         d->number, cache_domain_numa_node(d)->number, buffer, (unsigned long)d->load);
> >     if (d->children)
> > -           for_each_object(d->children, dump_topo_obj, NULL);
> > +           for_each_object(d->children, dump_cpu_obj, NULL);
> >     if (g_list_length(d->interrupts) > 0)
> >             for_each_irq(d->interrupts, dump_irq, (void *)10);
> >  }
> > -- 
> > 2.1.0
> > 
> > 
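
Side note for archive readers: for_each_object is the walker that applies
these dump callbacks to a child list. A hedged sketch of that pattern,
assuming glib's GList as used elsewhere in irqbalance (the real signature
in cputree.c may differ):

#include <glib.h>

struct topo_obj;	/* full definition lives in irqbalance's types.h */

/* Sketch of the for_each_object idiom from the hunk above: walk a GList
 * of topo_obj children and invoke the callback on each entry. This is an
 * illustration of the pattern, not the verbatim irqbalance code. */
static void for_each_object(GList *list,
			    void (*cb)(struct topo_obj *obj, void *data),
			    void *data)
{
	GList *entry;

	for (entry = g_list_first(list); entry; entry = g_list_next(entry))
		cb((struct topo_obj *)entry->data, data);
}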

-- 
Petr Holasek
pholasek@xxxxxxxxxx
