Thoughts about alternative approach to multithreading [message #18642] Tue, 14 October 2008 01:10
Mindtraveller

Recently I've come across a description of the Erlang language's approach to making multithreading (MT) in applications understandable and debuggable. In short, they avoid many of the difficulties of MT by proposing a single, unified multithreading technique.

Each thread has its own queue and variables. Threads see each other, but they don't see each other's variables. The only way to interact is by exchanging events. Each event is placed into the thread's local queue and processed sequentially.

No (public) critical sections, no deadlocks, no synchronization issues. No headache. No tricky debugging.
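
To make the model concrete, here is a minimal sketch in plain standard C++ (not the U++ API; all names are illustrative only): each worker owns a private queue of events, and other threads interact with it only by posting into that queue.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// One worker = one thread + one private event queue.
// Other threads never touch the worker's data; they only Post() events to it.
class EventThread {
public:
    EventThread() : worker([this] { Run(); }) {}
    ~EventThread() {
        Post([this] { stop = true; });      // shutdown is just another event
        worker.join();
    }
    void Post(std::function<void()> event) {
        { std::lock_guard<std::mutex> g(lock); events.push(std::move(event)); }
        ready.notify_one();
    }
private:
    void Run() {
        while(!stop) {
            std::function<void()> event;
            {
                std::unique_lock<std::mutex> g(lock);
                ready.wait(g, [this] { return !events.empty(); });
                event = std::move(events.front());
                events.pop();
            }
            event();                        // handled sequentially, one by one
        }
    }
    std::queue<std::function<void()>> events;  // the thread-local queue
    std::mutex              lock;              // guards only the queue itself
    std::condition_variable ready;
    bool                    stop = false;      // touched only by the worker
    std::thread             worker;            // must be declared last
};

A deadlock cannot involve the worker's own data, because nobody else can reach it; the only lock is the short-lived one around the queue itself.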

IMO this makes sense. While recently developing MT-based classes I came to the same approach, and it proved to be very stable, and almost as effective as "classic" synchronization objects.

What do you think about that? Maybe we should change the Thread class development direction a little? Should we drop Mutex/Semaphore support in favor of thread-local queue member functions?
Re: Thoughts about alternative approach to multithreading [message #18643 is a reply to message #18642] Tue, 14 October 2008 08:27
mr_ped
This makes perfect sense to me as long as you don't want to share memory (variables) between threads (for performance reasons, for example).

I think there's no point in limiting the U++ library to just the queue approach and forcing you to work around it in other cases.
But there may be some point in layering the MT API, so that if you wish to use just thread variables + a queue, you can easily find the proper subset of the U++ API to do just that.

Is there some reason why this subset can't live alongside the full mutex/etc. machinery in the same library?
Re: Thoughts about alternative approach to multithreading [message #18657 is a reply to message #18643] Wed, 15 October 2008 10:53
Mindtraveller

mr_ped wrote on Tue, 14 October 2008 10:27

Is there some reason why this subset can't live alongside the full mutex/etc. machinery in the same library?

The only reason I can think of is the lure of mixing the two approaches in one application. This would lead to many problems, just like mixing the procedural (C) and object-oriented (C++) styles in one program.
Re: Thoughts about alternative approach to multithreading [message #18658 is a reply to message #18657] Wed, 15 October 2008 11:59
mirek
Mindtraveller wrote on Wed, 15 October 2008 04:53

mr_ped wrote on Tue, 14 October 2008 10:27

Is there some reason why this subset can't live alongside the full mutex/etc. machinery in the same library?

The only reason I can think of is the lure of mixing the two approaches in one application. This would lead to many problems, just like mixing the procedural (C) and object-oriented (C++) styles in one program.


Well, I am afraid that you cannot take the primitives away from the library, just like you cannot take the 'C' out of 'C++' and retain its ability to deal with everything.

Anyway, the concept itself seems quite interesting. I think it is worth pursuing. What a pity that I have already finished my current grand MT application (and done it the 'hard way' :) ).

BTW, do you think it would be possible to use this somehow instead of CoWork in the 'reference/CoWork' example? I have my doubts... IMO the performance would suffer.

Mirek
Re: Thoughts about alternative approach to multithreading [message #18671 is a reply to message #18658] Wed, 15 October 2008 21:49
zsolt
In my current project we (the developers on the project) are using a queuing framework.

Our experience is that it is very easy to work with threads in an environment like this. It is very easy for beginners, too.

We use UnitTest++ very intensively, and this approach makes it very easy to create testable classes.
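
A tiny illustration of why it tests well (plain C++ with an assert; TemperatureTask, OnSample and the numbers are invented for the example; in our setup this would be a UnitTest++ TEST case): because a handler is an ordinary method, a test can call it directly and synchronously, without starting the thread or going through the FIFO at all.

#include <cassert>

// Hypothetical task class: the handler is a plain method.
class TemperatureTask /* : public Task */ {
public:
    void OnSample(double celsius) {          // normally invoked via the FIFO
        if(celsius > limit)
            alarms++;
    }
    int AlarmCount() const { return alarms; }
private:
    double limit  = 100.0;
    int    alarms = 0;
};

int main() {
    TemperatureTask task;
    task.OnSample(20.0);
    task.OnSample(120.0);                    // over the limit
    assert(task.AlarmCount() == 1);          // no threads, no FIFO needed
    return 0;
}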

In our implementation we don't use the old message-id-based design (used mostly in old embedded systems), as it needs a lot of administration. We use fairly heavy template code. The logic of using the framework is very similar to the Callback system of U++.
We have special Callback-like template classes, but we use them with reversed logic (described a little later).

We have Tasks, which are basically Thread classes with their own FIFOs. They wait on their FIFOs. The incoming items in the FIFO are CallbackXMethodAction-like objects.

The logic is reversed compared to U++'s Callbacks, as these Callback-like objects are mostly connected to a method of the same object they live in.

So these Callback-like objects are callable from any thread. On a call they create a CallbackXMethodAction-like object and put it into their Task's FIFO.
The Task reads this CallbackXMethodAction-like object on its own thread and executes the method.
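
A very reduced sketch of that shape in plain C++ (invented names, no locking and no real thread, just to show the "reversed" direction): the asynchronous front-end method only packages its arguments into an action and drops it into its own Task's FIFO; the real method runs later, when the Task drains the FIFO on its own thread.

#include <functional>
#include <iostream>
#include <queue>
#include <string>

// A Task owns a FIFO of ready-to-run actions.
class Task {
public:
    void Enqueue(std::function<void()> action) { fifo.push(std::move(action)); }
    void Drain() {                            // body of the Task's thread loop
        while(!fifo.empty()) {
            fifo.front()();
            fifo.pop();
        }
    }
private:
    std::queue<std::function<void()>> fifo;
};

class Logger : public Task {
public:
    // Callable from any thread: it only enqueues; the work happens later,
    // on the Logger's own thread, when Drain() runs.
    void LogAsync(std::string line) {
        Enqueue([this, line] { DoLog(line); });   // "reversed": bound to a
    }                                             // method of this same object
private:
    void DoLog(const std::string& line) { std::cout << line << '\n'; }
};

int main() {
    Logger log;
    log.LogAsync("request received");     // producer side: cheap enqueue
    log.LogAsync("request handled");
    log.Drain();                          // consumer side: executes both
    return 0;
}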

[Updated on: Wed, 15 October 2008 21:50]

Re: Thoughts about alternative approach to multithreading [message #18673 is a reply to message #18671] Wed, 15 October 2008 22:53
Mindtraveller

I think a QueueThread class should support classic U++ Callbacks, because from the hierarchical point of view a thread is a simple worker that should know nothing about the specifics of the code it executes.
Also, I think that adding an event should rely heavily on pick behaviour, so that adding a new event (with its data) is a very "cheap" operation.
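
A small plain-C++ illustration of the "pick is cheap" point (U++'s pick semantics would play the same role that std::move plays here; the names are made up): posting an event transfers its payload instead of deep-copying it, so the cost of enqueueing does not grow with the size of the data.

#include <queue>
#include <utility>
#include <vector>

// An event carrying a potentially large payload.
struct PaintEvent {
    std::vector<unsigned char> pixels;    // could be megabytes
};

std::queue<PaintEvent> eventQueue;        // one thread's private queue

void PostEvent(PaintEvent&& e) {
    eventQueue.push(std::move(e));        // O(1): the buffer is "picked",
}                                         // not copied

int main() {
    PaintEvent e;
    e.pixels.resize(4 * 1024 * 1024);     // 4 MB of pixel data
    PostEvent(std::move(e));              // after this, e.pixels is empty
    return 0;
}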

I have been using this approach for some time in industrial automation projects, and it has proved to be stable and predictable. Recently I posted an archive with a queued thread class (ConveyorThread). The archive also contains an RS232Thread class derived from ConveyorThread, with documentation.
The more I think of it, the more I want to switch from classic synchronization objects to queued threads.
Using one's brain :p, templates, picking, and U++ containers should make this as effective as simple Mutexes.
Re: Thoughts about alternative approach to multithreading [message #18681 is a reply to message #18673] Thu, 16 October 2008 11:38
zsolt
Yes, I think your ConveyorThread would be a very good starting point for a good MT framework.
Re: Thoughts about alternative approach to multithreading [message #18682 is a reply to message #18681] Thu, 16 October 2008 13:48
Mindtraveller

For some time I have been thinking about Mirek's question about CoWork, and I couldn't find a way to apply the queue approach to the reference/CoWork example natively. The only way I can think of is a QueueCoWork acting as a pool manager for QueueThread objects, deciding which thread should execute the next action depending on how busy each one is (i.e. its queue length). At the highest level of the hierarchy, QueueCoWork receives a PaintLines message from the main thread's Paint. This event executes the main class's callback (PaintLines below), which manages the pooling of the low-level callbacks that paint the lines. IMO it looks quite ugly:
void QueueCoWork::Manage(const Callback& cb)
{
    int mostFreeThread = -1;
    //find most free (unbusy) Thread

    queueThreads[mostFreeThread] << cb;
}

void QueueCoWork::ManageTypical(const Callback& cb)
{
    //simply rotate through threads to average tasks count
    lastUsedThread = (lastUsedThread + 1) % queueThreads.GetCount();

    queueThreads[lastUsedThread] << cb;
}

//----------------------------------------------------
void MainWindow::OnPaint()
{
    queueCoWork << THISBACK(PaintLines);
}

void MainWindow::PaintLines()
{
    for (int y=0; y<height; ++y)
        queueCoWork.ManageTypical(THISBACK1(PaintLine, y));
}
void MainWindow::PaintLine(int y)
{
    //paint the y-th line
}

[Updated on: Thu, 16 October 2008 13:50]

Re: Thoughts about alternative approach to multithreading [message #18684 is a reply to message #18682] Thu, 16 October 2008 15:04
mirek
Mindtraveller wrote on Thu, 16 October 2008 07:48

For some time I have been thinking about Mirek's question about CoWork, and I couldn't find a way to apply the queue approach to the reference/CoWork example natively. [...]

IMO it is even worse: the "PaintLine" equivalent in a more complex example might need a Mutex... It is not needed in this example, because the algorithm never touches the same data, but a more general case might need to work with some shared data...

Actually, I believe that working with shared data, while inherently bug-prone, is where you can gain some performance using MT...
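
A small sketch of that point (plain C++, illustrative names): even if every PaintLine-style task arrives through a queue and works mostly on private data, anything it shares with the other tasks, here a per-image histogram, still needs its own lock, no matter how the task was dispatched.

#include <mutex>
#include <vector>

// Shared between all line-painting tasks.
struct Histogram {
    std::vector<long> bins = std::vector<long>(256, 0);
    std::mutex        lock;
};

void PaintLine(int y, const unsigned char* line, int width, Histogram& h) {
    (void)y;                                  // line index, unused in the sketch
    long local[256] = {0};
    for(int x = 0; x < width; ++x)            // private work: no locking needed
        local[line[x]]++;
    std::lock_guard<std::mutex> g(h.lock);    // shared work: mutex required,
    for(int i = 0; i < 256; ++i)              // whether the task came through
        h.bins[i] += local[i];                // a queue or through CoWork
}

int main() {
    Histogram h;
    unsigned char row[4] = { 0, 1, 1, 255 };
    PaintLine(0, row, 4, h);                  // single call, just to keep the
    return 0;                                 // sketch self-contained
}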

Mirek
Re: Thoughts about alternative approach to multithreading [message #19130 is a reply to message #18684] Fri, 14 November 2008 10:09
Mindtraveller

OK, I'd like to report some progress on "alternative" multithreading. It looks like I have finally managed to figure out how to apply it to U++ correctly (even in the previous example). I'll describe it as soon as the class is ready and tested.

For now I have made a little investigation into callback efficiency, as I'd like to use callbacks in my implementation. Here is a simple test:
#include <Core/Core.h>
#include <math.h>

using namespace Upp;

class TestClass
{
public:
	void func(int a, int b) 
	{
		int c = a + b*b; 
		double xx = sin(a+b*1.34-3.14159252596428);
		double yy = cos((double) (b-a));
		x += ceil(xx*yy*(c-(b << 4)+(a*b)));
	}
	
private:
	int x = 0;
};

CONSOLE_APP_MAIN
{
	TestClass tc;

	Callback2<int,int> cb = callback(&tc, &TestClass::func);
	dword cnt = 0;
	for (int j=0; j<50; ++j)
	{
		Cout() << FormatInt(j) << " ";
		dword t1 = GetTickCount();
		for (int i=0; i<2000000; ++i)
		{
			cb(100,500);
			//tc.func(100, 500);
		}
		cnt += GetTickCount()-t1;
		
		Sleep(1500);
	}
	
	Cout() << "\n" << FormatInt(cnt / 50);   // 50 = number of outer iterations
}


Averaged values were calculated for three cases:
1) Plain call inside the loop:
tc.func(100, 500);

2) Callback call inside the loop:
cb(100,500);

3) Callback stack variable creation + call + destruction inside the loop:
Callback2<int,int> cb = callback(&tc, &TestClass::func);
cb(100,500);

On my PC I get these values (in msecs):
1) 594
2) 592
3) 728

Conclusion: U++ callbacks are very efficient. Even for simple functions, the cost of using a callback instead of a plain member function call is very small. Creating and destroying a callback stack variable does take some time; not very much, but it is better to avoid it.

Investigation continues.
Re: Thoughts about alternative approach to multithreading [message #19131 is a reply to message #19130] Fri, 14 November 2008 10:15
mirek
Mindtraveller wrote on Fri, 14 November 2008 04:09


Creating and destroying a callback stack variable does take some time; not very much, but it is better to avoid it.

Investigation continues.


It involves an MT synchronization primitive (an atomic counter increment/decrement); that is the cause of the slowdown.
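
For illustration, roughly what that looks like (this is not the actual U++ Callback source, just a sketch of the pattern): the shared action object is kept alive by an atomic reference count, so every copy and destruction of the handle pays for an atomic increment or decrement, and that is exactly what a tight create-call-destroy loop ends up measuring.

#include <atomic>
#include <cstdio>

// A heap-allocated action kept alive by an atomic reference count.
struct ActionRep {
    std::atomic<int> refcount{1};
    virtual void Execute() = 0;
    virtual ~ActionRep() {}
};

class CallbackHandle {
public:
    explicit CallbackHandle(ActionRep* r) : rep(r) {}
    CallbackHandle(const CallbackHandle& b) : rep(b.rep) {
        rep->refcount.fetch_add(1, std::memory_order_relaxed);        // atomic inc
    }
    ~CallbackHandle() {
        if(rep->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete rep;                                                // atomic dec
    }
    void operator()() const { rep->Execute(); }
private:
    ActionRep* rep;
};

struct PrintAction : ActionRep {
    void Execute() override { std::puts("hello"); }
};

int main() {
    CallbackHandle cb(new PrintAction);   // counter starts at 1
    CallbackHandle copy = cb;             // atomic increment
    copy();                               // virtual dispatch to the action
    return 0;
}                                         // two atomic decrements, one delete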

Mirek
Re: Thoughts about alternative approach to multithreading [message #19181 is a reply to message #19131] Mon, 17 November 2008 16:41
Mindtraveller

Mirek, thank you for this important note.

I keep working on the "alternative" threading model and have some news. The thing I am working on is making the posting of a callback to a queue as quick as possible.
Let's recall the previous result: a plain function call took ~600 msecs of averaged execution time. If you create a callback with arguments, execute it, and destroy it inside the loop:
for (...)
{
    Callback cb = callback2(&tc, &TestClass::func, 100, 500);
    cb();
}

you get an averaged execution time of ~780 msecs.

To make the comparison fairer, I emulate adding a callback to a queue, executing it, and then removing it from the queue:
BiVector<Callback> cbv;
cbv.Reserve(100);
for (...)
{
    cbv.AddTail(callback2(&tc, &TestClass::func, 100, 500));
    cbv.Head().Execute();
    cbv.DropHead();
}

Testing this code gave an execution time of ~820 msecs.

OK, what makes this difference between a plain call and posting a callback? 1) Yes, the creation/destruction of the Atomic member inside the callback. 2) The creation of the callback object itself, along with its internal member with virtual functions, and more. We must also keep in mind that a thread will, as a rule, have a very limited number of public callback types: not hundreds, most likely around 10 or even fewer.
What if I avoid the complex Callback object and use something simpler instead? What if I have a queue for each callback type, where only the arguments are stored?
It took me a number of days of thinking and a pair of dirty tricks to do it. Finally I came up with something like a quick queued-class prototype:
class AThreaded
{
public:
	AThreaded()
	{
		args.SetCount(0xFF + 1); // simple array + "hash" instead of Index, because Index elements are constant
	}
	
	template<class OBJECT, class P1, class P2>
	void RequestAction(void (OBJECT::*m)(P1,P2), const P1 &p1, const P2 &p2)
	{
		typedef void (OBJECT::*Func)(P1,P2);
		struct Args : public Moveable<Args>
		{
			P1   p1;
			P2   p2;
		};

		// using the method pointer as a hash value; note that a method pointer's size may be >= that of a plain (void *)
		int methodPtrSize = sizeof(m) / sizeof(unsigned);
		unsigned *cur     = (unsigned *) (&m);
		unsigned hashV    = 0;
		for (int i=0; i<methodPtrSize; ++i, ++cur) hashV+=*cur;
		hashV &= 0xFF;
		
		int argsI = hashV;
		if (args[argsI].IsEmpty())
		{
			//creating arguments queue for new callback
			Any aa;
			aa.Create< BiVector<Args> >();
			args[hashV] = aa;
			args[argsI].Get< BiVector<Args> >().Reserve(100);
		}
		
		Args newArgs;
		newArgs.p1 = p1;
		newArgs.p2 = p2;
		
		//just emulating add+execute+drop
		BiVector<Args> &curArgsQueue = args[argsI].Get< BiVector<Args> >();
		curArgsQueue.AddTail(newArgs);
		Args &curArgs = curArgsQueue.Head();
		(((OBJECT *) this)->*m)(curArgs.p1, curArgs.p2);
		curArgsQueue.DropHead();
	}
		
private:
	Array<Any> args;
};

And the execution time is... ~640 msecs. This is almost as fast as the plain function call (600 msecs), versus ~840 msecs when using classic U++ callbacks.
More than that, posting a callback looks rather nice for the user:
class TestClass : public AThreaded {...};
TestClass tc;

tc.RequestAction(&TestClass::func, 100, 500);


I would appreciate any feedback, particularly comments on potential problems with this code.

The investigation of the "alternative" threading model continues.
Re: Thoughts about alternative approach to multithreading [message #19184 is a reply to message #19181] Tue, 18 November 2008 00:53
Mindtraveller

I have finally finished a new version of the "alternative" threading class based on the approach I described above. It should be several times faster than a classic callback queue. The idea of the class is to make asynchronous messaging between threads as close to a plain function call as possible.
To show how much overhead asynchronous messaging with the ACallbackThread class adds, I have made a series of tests and generated an approximate plot of overhead versus the "density" of events. Please keep in mind that the tests were executed on a rather prehistoric CPU (AMD 2 GHz, single core, DDR RAM).
...

At the moment I don't know whether these are good or bad results. But at first glance, I would say that IMO most applications will generate no more than 200 events per second, so the overhead will stay under 2% even on old CPUs. Maybe it is all finally worth the effort not only of building the async queue mechanism but of optimizing the callback queue itself.
  • Attachment: acb_jobs.jpg
    (Size: 149.04KB, Downloaded 580 times)
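
As a rough sanity check on that 2% figure (my back-of-envelope, not read off the plot): 2% of one second is 20 ms, and spread over 200 events that is a budget of about 20 ms / 200 = 100 microseconds of messaging overhead per event. So the claim amounts to saying that posting and dispatching one event costs well under 100 microseconds even on an old single-core CPU.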

[Updated on: Tue, 27 January 2009 00:17]

Re: Thoughts about alternative approach to multithreading [message #19185 is a reply to message #19181] Tue, 18 November 2008 09:05
mr_ped
I'm sort of a newcomer in terms of MT in a high-level language, so excuse me if I have some stupid questions (and I'm half asleep too)... but anyway, you did want feedback. :)



Mindtraveller wrote on Mon, 17 November 2008 16:41


[quoted AThreaded prototype code snipped; see the previous message]

* You don't do anything in case two callback functions have the same hash. I'm not sure how you want to handle that, or whether you have already solved it somehow.
The first things that came to my mind when I tried to solve this were (a rough sketch of the first one follows at the end of this message):
- store the original method pointer with every parameter queue. If for a new call the hash is the same but the pointer is different, call the action immediately (i.e. the second action routine is never postponed into the queue).
- have several hash functions available; in case of a collision try a different one, and if it has no collision, "rehash" the whole queue table and continue with that hash function.


* Is "curArgsQueue.AddTail(newArgs);" (and the rest of them) atomic? Or can RequestAction not be called concurrently for the same hash (function)?

* I was unable to copy/paste that piece of code and make it work. (Should it work with 2008.1 + MINGW? Can you post a test package?)

* P1 and P2 must be moveable, right?
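
A rough sketch of the first idea above (plain free functions instead of member-function pointers, invented names, no threading): each hash slot remembers which routine owns it, and on a collision with a different routine the action is executed immediately instead of being queued.

#include <cstdint>
#include <cstdio>
#include <deque>

using Routine = void(*)(int);

struct Slot {
    Routine         owner = nullptr;      // which routine this queue belongs to
    std::deque<int> pendingArgs;          // queued arguments for that routine
};

Slot slots[256];

unsigned HashOf(Routine r) {
    // hashing a function pointer through uintptr_t; good enough for a sketch
    return (unsigned)(reinterpret_cast<std::uintptr_t>(r) >> 4) & 0xFF;
}

void RequestAction(Routine r, int arg) {
    Slot& s = slots[HashOf(r)];
    if(s.owner == nullptr)
        s.owner = r;                      // first user claims the slot
    if(s.owner == r)
        s.pendingArgs.push_back(arg);     // normal case: postpone into the queue
    else
        r(arg);                           // collision: run it right away
}

void PrintIt(int x) { std::printf("PrintIt(%d)\n", x); }

int main() {
    RequestAction(&PrintIt, 7);           // lands in PrintIt's slot
    // ...a worker thread would later drain slots[...].pendingArgs...
    return 0;
}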
Re: Thoughts about alternative approach to multithreading [message #19188 is a reply to message #19184] Tue, 18 November 2008 20:05
mirek
Mindtraveller wrote on Mon, 17 November 2008 18:53


But at first glance, I would say that IMO most applications will generate no more than 200 events per second, so the overhead will stay under 2% even on old CPUs.



Some perspective:

My MT server (the application I developed this year; it is the backend for a community website) is able to handle 10000 requests per second (on a quad-core CPU). Every such request would generate tens or even hundreds of 'events' in your model.

Also, I believe that all event queues will have to be synchronized anyway, something that your current measurements do not account for....

Mirek

[Updated on: Tue, 18 November 2008 20:09]

Re: Thoughts about alternative approach to multithreading [message #19189 is a reply to message #19188] Wed, 19 November 2008 01:45
Mindtraveller

Thank you for your attention to this topic.

* You don't do anything in case two callback functions have the same hash.
--- Yes, and this is the problem for now. I'd like to use HashBase as the hash table (it would solve this problem), but it is not documented and is rather a black box for me when I compare the efficiency of my plain implementation against the HashBase class.

* Is "curArgsQueue.AddTail(newArgs);" (and the rest of them) atomic? Or can RequestAction not be called concurrently for the same hash (function)?
* I was unable to copy/paste that piece of code and make it work. (Should it work with 2008.1 + MINGW? Can you post a test package?)

--- Certainly, because I posted a stripped-down version of the original code, just to discuss the idea itself.

* P1 and P2 must be moveable, right?
--- I keep thinking about this requirement. It is crucial for overall efficiency, but non-moveable parameters should also work, I suppose...

* Also, I believe that all event queues will have to be synchronized anyway, something that your current measurements do not account for...
--- As I mentioned before, the actual tests were made with the working class, not with the simple and rather abstract example I have shown.

* My MT server (the application I developed this year; it is the backend for a community website) is able to handle 10000 requests per second (on a quad-core CPU). Every such request would generate tens or even hundreds of 'events' in your model.
--- I believe it is time to make some comparison. Maybe we should choose some reference example to be rewritten using the alternative threading; that would give a comparison between the efficiency of the classic and the alternative approaches. Also, maybe the CoWork reference example is not the best choice for comparing efficiency here? IMO it contains some functionality not covered by the Thread alternative (if the alternative approach proves worthy, it will make sense to make a CoWork alternative, but not earlier). Maybe we should choose a comparison example which is closer to "pure" threading? Or maybe I'm wrong and the CoWork reference example is just what we need to start the comparison with? What do you think?

[Updated on: Wed, 19 November 2008 01:58]

Re: Thoughts about alternative approach to multithreading [message #19195 is a reply to message #19189] Wed, 19 November 2008 10:39
mirek
Mindtraveller wrote on Tue, 18 November 2008 19:45


Maybe we should choose a comparison example which is closer to "pure" threading?



What about a simple "shared cache"?

I am not sure how it would look in your model, but the 'normal' way should be simple:

StaticMutex               mutex;
VectorMap<String, String> data;

void SetData(const String& key, const String& value)
{
    INTERLOCKED(mutex)
           data.GetAdd(key) = value;
}

String GetData(const String& key)
{
    INTERLOCKED(mutex)
           return data.Get(key, Null);
}


Mirek
Re: Thoughts about alternative approach to multithreading [message #19197 is a reply to message #19195] Wed, 19 November 2008 13:20
Mindtraveller

luzr wrote on Wed, 19 November 2008 12:39

[quoted shared-cache code snipped; see the previous message]

What I see here is simply atomic access to a data container.
And it reveals a big difference between these two approaches. It is like C and C++. C++ is based on "fully autonomous" objects with a limited number of public methods. Public methods are like high-level messages to the object, which is intelligent enough to process them. To keep things maintainable, everything should be hidden inside an object. There is still a (limited) number of cases where an object's public method just gets or sets a variable. In general, though, getter/setter functions are an evil, because usually they amount to making a class member public.
...of course, you know this well.

The alternative approach introduces a CallbackThread flavour for classes. It adds a "multithreading" capability. You still use an object's public methods, which are now called slightly differently and are processed in the background. From the OO perspective you keep classes maintainable, since neither public member variables nor getters/setters are introduced.
This is contrary to classic MT, which commonly uses "shared" variables paired with synchronization objects.
So you can't compare a plain getter/setter, because well-designed objects shouldn't have them (apart from the rare cases described above). You should compare real-world examples, where you don't just set or get a variable but do some processing with it. In a real-world application you don't just add to an object's container; you add something to the container and then do some useful work with it. We have to keep in mind that the alternative approach is not about handling variables, but about doing things.

[Updated on: Wed, 19 November 2008 13:24]

Re: Thoughts about alternative approach to multithreading [message #19201 is a reply to message #19197] Wed, 19 November 2008 19:38
mirek
Mindtraveller wrote on Wed, 19 November 2008 07:20

You should compare real-world examples



http://en.wikipedia.org/wiki/Memcached

Mirek
Re: Thoughts about alternative approach to multithreading [message #19203 is a reply to message #19201] Thu, 20 November 2008 10:19
Mindtraveller

If I understand your example right...
class CachedSQL : public CallbackThread
{
public:
	void GetDataGiveToAnswerer(String request)
	{
		int rqIndex = data.Find(request);
		bool needAdd = false;
		String answer;
		if (rqIndex < 0)
		{
			answer = FetchFromDatabase(request);
			needAdd = true;
		}
		else
			answer = data[rqIndex];
		
		theAnswersProcessor.AddTask(&HttpAnswersProcessor::ProcessHttpAnswer, request, answer);
		
		if (needAdd)
			data.Add(request, answer);
	}
};

class HttpRequestsProcessor : public CallbackThread
{
public:
	void ProcessHttpRequest(String rq)
	{
		//some processing here
		
		// OK, you need the user's info
		theCachedSQL.AddTask(&CachedSQL::GetDataGiveToAnswerer, user);
		
		//optional additional processing
	}
};

class HttpAnswersProcessor : public CallbackThread
{
public:
	void ProcessHttpAnswer(String request, String answer)
	{
		//some processing and sending answer
	}
};

///////////////////////////////////////////////////////////////////////////
// Let's imagine 2 users sent their requests simultaneously.
//
// Time offset:
// 0         1         2         3         4         5         6         7
// 0123456789012345678901234567890123456789012345678901234567890123456789
//--------------------------------------------------------------------------
// Synchronous work
// |--request1--|--sql1--|--answer1--|--request2--|--sql2--|--answer2--|...
// user1 will wait 3.5
// user2 will wait 6.9
//--------------------------------------------------------------------------
// Asynchronous work; the '%' symbol marks a queue event posting / thread wake-up
// |--request1--|
//            %|--sql1--| *cache miss; adding to data AFTER the answer is sent to the answerer*
//                   %|--answer1--|
//              |--request2--|
//                         %|--sql2--| *value cached, no need to add this time*
//                                   %|--answer2--|
// user1 will wait 3.2
// user2 will wait 4.8
//--------------------------------------------------------------------------

