If you’re debugging the internals of nanomsg, then you need to spend some
time learning the asynchronous architecture it uses under the hood.
Otherwise, if you’re just debugging your application, step *over* the
nanomsg function calls rather than into them. I can’t really help you
further here, since I don’t do Windows.

	- Garrett

> On Mar 9, 2015, at 5:08 AM, Ranier VF <ranier_gyn@xxxxxxxxxxx> wrote:
>
> Hi,
> I noticed this with windbg (the Microsoft debugger), stepping line by
> line (F8, step into).
>
> With IPC it is possible to debug, but on the call to nn_close the
> debugger (F10, step over) jumps to another thread (main?) and it is no
> longer possible to continue debugging the application function.
>
> With TCP it is not possible to debug with windbg: on the call to
> nn_recv the debugger stops with an Access Violation (WSPSend), although
> at runtime it works fine, without a GPF.
>
> Best regards,
>
> Ranier Vilela
>
> From: garrett@xxxxxxxxxx
> Subject: [nanomsg] Re: Questions nn functions inside thread
> Date: Thu, 5 Mar 2015 13:34:12 -0800
> To: nanomsg@xxxxxxxxxxxxx
>
> On Mar 5, 2015, at 12:24 PM, Ranier VF <ranier_gyn@xxxxxxxxxxx> wrote:
>
>> Hi,
>>
>> Socket: PAIR
>>
>> Protocol: IPC
>> nn_connect inside a thread works, and flow continues to the next line.
>>
>> Protocol: TCP
>> nn_connect inside a thread exits the thread, and flow continues on the
>> main thread.
>>
>> Is this by design?
>
> The use of threads by libnanomsg is an implementation detail. As far as
> your application and the API are concerned, nn_connect() operates
> asynchronously. There may be some synchronous handling (IPC is the
> easiest case, for example, because you don’t have to wait for a
> network), but you should not rely on that. The physical transport may
> or may not be connected when nn_connect() returns. That *is* by design.
>
>> Protocol: IPC
>> nn_close inside a thread exits the thread!
>> Is that correct?
>
> *That* seems weird. Are you sure? I don’t think it calls thr_exit().
> Although, if you call it from within one of the nn_xxx functions, that
> might be expected.
>
> 	- Garrett
>
>> Best regards,
>>
>> Ranier Vilela