Quote: |
2. A lot of built-in features, like variable-length arrays and hash maps, which, combined with some extra functions, can be used to create types like Vector or Map, but even easier to use and without templates. char[] and about 10 extra functions could make String and StringBuffer obsolete. |
Quote: |
3. Templates + mixins + static ifs are stronger than C++ templates plus the preprocessor. I have never used them, because in D you can do a lot more without templates than in C++. |
Quote: |
4. Optional (but defaulting to on) garbage collection. As much as you may dislike the idea of garbage collection, memory management in C++ is a nightmare, and from this point of view even U++, which has far fewer such issues, cannot offer such pain-free management. |
Quote: |
I think a new C++ should break compatibility and be more like D. I don't understand why it cannot, since all current/old apps can still be built with old compilers. Otherwise C++ will get fatter and fatter (and more complicated) with each standard revision. |
Quote: |
I prefer the RAII approach, and U++ is a very good example that it really works. Frankly, if you use NTL or STL (and follow the RAII way), it is a rare situation where you have to worry about memory management. I don't know why this is still an issue. |
Quote: |
Again, I just don't like the idea of uncontrolled garbage collection. Developing time-critical programs for industrial automation, I dislike the fact that my time-critical code can be interrupted or slowed down by some uncontrolled garbage-collection process. |
luzr |
U++ is about twice as fast as D, with shorter code |
cbpporter |
They are by no means uncontrollable, and only happen in some apps.
#include <Core/Core.h>

using namespace Upp;

#define BENCHMARK // for benchmark purposes, output is omitted

#ifdef BENCHMARK
#define BENCHBEG for(int i = 0; i < 1000; i++) {
#define BENCHEND }
#else
#define BENCHBEG
#define BENCHEND
#endif

int main(int argc, const char *argv[])
{
	VectorMap<String, int> map;
	BENCHBEG
	for(int i = 1; i < argc; i++) {
		String f = LoadFile(argv[i]);
		int line = 1;
		const char *q = f;
		for(;;) {
			int c = *q;
			if(IsAlpha(c)) {
				const char *b = q++;
				while(IsAlNum(*q))
					q++;
				map.GetAdd(String(b, q), 0)++;
			}
			else {
				if(!c) break;
				if(c == '\n')
					++line;
				q++;
			}
		}
	}
	BENCHEND
	Vector<int> order = GetSortOrder(map.GetKeys());
#ifndef BENCHMARK
	for(int i = 0; i < order.GetCount(); i++)
		Cout() << map.GetKey(order[i]) << ": " << map[order[i]] << '\n';
#endif
	printf("%d\n", map.GetCount());
	return 0;
}
Quote: |
P.S. luzr, I just thought that the D vs. U++ comparison is not quite fair. It would be better to compare the internal language features of D and C++. According to the tests described on the D site (I looked at them as you recommended), D is no slower than C++ (at least in some cases?). So porting the U++ classes and algorithms to D, adapting them to D's specifics, could make U+D (U++ for D) as fast as the original U++. It's just a theory, of course. |
luzr wrote on Thu, 08 November 2007 19:25 |
Well, as for const, I generally tend to think that while initially it feels like the plague, after a while you find it quite useful, at least for describing the interface better. Been through it ages ago.... |
#define private public
#include "alib.h"
Quote: |
In other words, you can also say that a well written library can do what it needs |
Quote: |
Actually, interestingly, it seems I am the only one here who actually likes C++ as it is (except for some quite small issues and the standard library, which IMO only looks like a good design). |
#define a_type mytype
#define an_include </my/include/dir/a_type/mytype.hxx>
#define another_include "/my/include/dir/a_type/mytype.hxx"
#include an_include
#include another_include
#include </my/include/dir/mytype/mytype.hxx>
#include "/my/include/dir/a_type/mytype.hxx"
luzr wrote on Fri, 09 November 2007 08:51 |
Well, but that is a useful feature, and this is one of the things I like about C++ - the "default" mode is "safe", but you can always do dirty things when you need them. |
#define private public
#include "alib.h"
Quote: |
Well, a well-written lib should do what the coder intended, *not* whatever the user is missing. Before using C++ hacks to overcome a lib's limitations, you have 3 solutions:
1) Patch the sources, if you have them
2) Ask the original programmer to enhance the lib
3) Just find another lib that suits your needs |
Quote: |
And if neither is possible? You quit the job? |
Quote: |
#define private public
#include "alib.h" |
Quote: |
#define private public
#include "alib.h" |
Quote: |
As for the C++ language, I'd very much like a 100% ANSI-compliant C++ interpreter... Nowadays many people write their C++ programs and link them to some so-called scripting language (one for all: Python) to have more flexibility, but why can't I use C++ to perform the same task? I've looked into CINT: while it covers most (if not all) of C, it is still at 85% of ANSI C++, and the authors themselves say it will never reach the 100% goal. |
Quote: |
And if neither is possible? You quit the job? |
Quote: |
Modularity should not be difficult either; it's just a matter of defining a new object format that also contains precompiled declarations, as Borland did with their packages for Delphi. All that could sit side by side with the actual C++ implementation. |
Quote: |
Adding good string and array base types should not be a big problem either, and they could also be much faster than the actual template solutions... so why not? |
Quote: |
Well, but keep in mind that the C++ *standard* is intended as a multiplatform solution. It must not, e.g., have anything in it preventing the use of the language on a platform that can only work with 36-bit words... What you demand is possible even now - there is nothing in the C++ standard that would make it impossible for a specific implementation. |
Quote: |
Or you would be stuck with a slow implementation and no way to improve it... |
Quote: |
And I'm quite surprised to see people who don't like GC but have nothing against reference counting, which is slower |
Quote: |
Either they have evolved a great deal since I took that quick glance at them (quite possible), or by using GC you get everything ref-counted by default. |
cbpporter wrote on Sat, 10 November 2007 11:31 |
And I'm quite surprised to see people who don't like GC but have nothing against reference counting, which is slower and almost impossible to use efficiently in a multi-threaded application. |
luzr wrote on Sat, 10 November 2007 17:54 |
I mostly agree with this... |
Quote: |
Even pick_ can be non-thread-safe if it's badly coded, and must have some sort of synchronizing code to be thread safe. |
Quote: |
I would not accept a language based mostly on GC, but I've got nothing against an optional GC among other management facilities. |
Quote: |
BTW, I can't see how refcounting can be slower than GC... maybe I'm wrong, but I'd like to have it explained! |
RefCounted<Foo> Fn()
{
	RefCounted<Foo> x = new Foo;
	.....
	return x;
}

void Fn2()
{
	RefCounted<Foo> y = new Foo;
	...
	y = Fn();
}
mdelfede wrote on Sat, 10 November 2007 17:57 |
As with GC, refcounting can be made thread-safe, IMHO. |
Quote: |
And, w.r.t. thread safety, the other trouble is that you cannot restrict the atomic operations to the cases where the object is really shared between threads (i.e. where they are actually needed). |
Quote: |
Everything can be made thread-unsafe if you really try. Anyway, the simplest pick_ implementation is naturally thread safe. |
Quote: |
The problem is that this is not quite possible. |
Quote: |
2) An ineffective and poor framework (even if it was much better than MFC) which wasn't really upgraded since something like 1997-1999 (the leading VCL developer was bought by M$ for the .NET platform). |
Quote: |
3) Wrong development direction: since the first versions, BC++B was strongly slanted toward database applications, against the interest of everyone else. |
mdelfede |
Not so ineffective or poor... at least an order of magnitude better than the M$ one.
mdelfede |
I don't like database apps either, but if they did so, maybe it was for some reason.
Quote: |
The greatest issue for me personally is that the IDE "insists" on only one programming style. |
Mindtraveller wrote on Sun, 11 November 2007 18:58 |
So, adding notes about efficiency: do you know how VCL handles its forms? Forms and components are converted to a textual representation. The Borland IDE gives the text to the linker, which appends these text resources to the end of the .exe file. When you start any BCB application, it runs a special parser (which by default is in a DLL!). The text resources are parsed. How? The application gives the text to an internal engine, then to the newly created components, which deserialize their properties from the text parts. That is how a BCB application starts. |
Quote: |
I had a similar experience with MFC and some other environments. This is what made me sceptical about "visual tools". (Ironically, it seems U++ is starting to be quite IDE-supported too... I guess when you have that power - to support your library in your IDE - it is just hard to resist.) |
Quote: |
So I would say that, for now, we have no adequate alternative to C++. |
Funny quote from the FQA |
"There's a basic assumption behind C++ that extra features can't be a problem - only missing features can. That's why there are so many features in C++, and in particular so many duplicate ones. Real world analogies ("imagine a dog with twelve legs") are pale compared to this reality." |
Quote: |
I don't see anything bad in loading forms from a resource file and/or from the exe file |
Glyph.Data = {
	76010000424D7601000000000000760000002800000020000000100000000100
	04000000000000010000120B0000120B00001000000000000000000000000000
	800000800000008080008000000080008000808000007F7F7F00BFBFBF000000
	FF0000FF000000FFFF00FF000000FF00FF00FFFF0000FFFFFF00555555555555
	5000555555555555577755555555555550B0555555555555F7F7555555555550
	00B05555555555577757555555555550B3B05555555555F7F557555555555000
	3B0555555555577755755555555500B3B0555555555577555755555555550B3B
	055555FFFF5F7F5575555700050003B05555577775777557555570BBB00B3B05
	555577555775557555550BBBBBB3B05555557F555555575555550BBBBBBB0555
	55557F55FF557F5555550BB003BB075555557F577F5575F5555577B003BBB055
	555575F7755557F5555550BB33BBB0555555575F555557F555555507BBBB0755
	55555575FFFF7755555555570000755555555557777775555555}
Quote: |
Ironically, it seems like U++ starts to be quite ide supported too |
Quote: |
D does seem good, and its built-in string handling and GC are apparently fast. |
Quote: |
...... |
Quote: |
And in D you never need pointer arithmetic. You don't even need to bother with pointers and allocations at all, except when working with C libs, and once in a while when you have a shared object and need to clone it first (a nasty bug otherwise). |
Quote: |
BTW, I did only that single D vs. U++ benchmark - U++ was about 2x faster, but what was really shocking is that D consumed 5 times as much memory.... |
Quote: |
Another advantage is zero memory fragmentation, if the mark-and-sweep collector is also a moving compactor. |
Quote: |
And if you make it generational too, you can really optimize it. |
Quote: |
So in theory, GC programs should run a lot faster. Memory is cheap, and if with 1-2 GB of extra memory you can gain sufficient extra speed, I think GC will continue to become more and more popular. But this is only theory, and that's why I'm interested in some scientifically sound benchmarks. |
Quote: |
I am no expert in GC and perhaps I am wrong about this, but IMO generational GC and moving GC are mutually exclusive. |
Quote: |
And no, you cannot have destructors and GC working together. |
Quote: |
I believe that with a couple of tricks, I am getting (with U++) more than I could get with GC - all resources are managed by program structure. |
Quote: |
Sorry for being rude, but your understanding of actual GC (and especially conservative GC) is sort of lacking. |
Quote: | ||
Well, you can. With a little extra care, you can have fully functional destructors (just be sure never to physically deallocate memory, just do cleanups). But with GC you rarely need non-trivial destructors. And if the programming language has a "scope" clause like D, things get a lot simpler. |
cbpporter wrote on Sat, 10 November 2007 11:31 |
And I'm quite surprised to see people who don't like GC but have nothing against reference counting, which is slower and almost impossible to use efficiently in a multi-threaded application. |
cbpporter |
Well, these execution freezes are no worse than on the JVM or .NET platforms. Actually, they can even be shorter. I would like to see some real-life samples of GC performance, not just speculation or my personal experience. Have you ever used a bigger .NET or JVM application? |
Quote: |
EDIT: I think 2) was implemented differently: instead of having a buffer with a fixed length, the memory-management thread performed a "stop the world" (pausing all threads) periodically and fetched the buffers from all running threads. While "stop the world" sounds like stalling, you have to remember that only some pointers to buffers have to be transferred to the memory-management thread... After the transfer, the world starts rotating again. |
Quote: |
Just don't get me wrong: I develop nearly real-time applications (not actually RTA at all, once we are discussing Windows). I work with actual hardware devices over a number of protocols. It all runs under a highly stripped-down version of Windows, which carries few of the system-hanging device drivers like the CD-ROM ones. So generally we have no big problems with OS latency on protocol timeouts of 50-500 msec. This means that neither Java nor .NET, nor similar "heavy" platforms, can be used (usually those industrial computers are not as fast as a Pentium 3/4, as needed to support virtual machines, and the memory installed may be below even 64/128 MB). |
mdelfede wrote on Mon, 12 November 2007 22:33 |
No doubt manual memory allocation is more efficient than GC, even if it can be a bit slower in the short term. And a good framework can help keep things simple. I'd rather extend C++ (or make some more modern language, without GC) to include some helpful features than switch to less efficient languages. |
http://www.iecc.com/gclist/GC-faq.html |
Folk myths
* GC is necessarily slower than manual memory management.
* GC will necessarily make my program pause.
* Manual memory management won't cause pauses.
* GC is incompatible with C and C++.

Folk truths
* Most allocated objects are dynamically referenced by a very small number of pointers. The most important small number is ONE.
* Most allocated objects have short lifetimes.
* Allocation patterns (size distributions, lifetime distributions) are bursty, not uniform.
* VM behavior matters.
* Cache behavior matters.
* "Optimal" strategies can fail miserably. |
Quote: |
I think you should really look at some proper comparisons of real efficiency impacts of using GC, rather than automatically assuming that it kills your program's performance. |
Quote: |
Also, you shouldn't just assume for sure that manual memory allocation must be more efficient than GC. This FAQ is worth a read, with an open mind, rather than being a hardened C++ "oldskool all the (hard and error-prone) way" purist. |
Quote: |
...... * Most allocated objects are dynamically referenced by a very small number of pointers. The most important small number is ONE. |
Quote: |
* Most allocated objects have short lifetimes. |