> The former, actually. I would have put it in detail if I
> intended to protect it.

Fair enough. I had assumed it was like the tree behind the std
associative containers - i.e., not for public use.

One further thought - it would be nice if you could improve the
efficiency of bulk inserts. I tried adding the following function to
associative_vector:

    template<class Iterator>
    void insert_equal_x(Iterator first, Iterator last) {
        m_impl.insert(m_impl.end(), first, last);
        std::sort(m_impl.begin(), m_impl.end(),
                  compose_f_gx_hy(m_key_compare, KeyOfValue(), KeyOfValue()));
    }

I then used the following test function:

    void test(int n, int m)
    {
        std::vector<int> initial(n);
        std::generate_n(initial.begin(), n, std::rand);
        std::vector<int> add(m);
        std::generate_n(add.begin(), m, std::rand);

        avint av1;
        av1.insert_equal_x(initial.begin(), initial.end());
        avint av2;
        av2.insert_equal_x(initial.begin(), initial.end());

        std::cout << n << "," << m << std::endl;
        {
            boost::timer t;
            av1.insert_equal(add.begin(), add.end());
            std::cout << "\t" << t.elapsed() << std::endl;
        }
        {
            boost::timer t;
            av2.insert_equal_x(add.begin(), add.end());
            std::cout << "\t" << t.elapsed() << std::endl;
        }
    }

and measured the results for various values of n and m. The results
were fascinating:

    n      m      insert_equal  insert_equal_x
    1      10000  1.1           0.025
    10     10000  0.65          0.025
    100    10000  0.643         0.026
    1000   10000  0.754         0.028
    10000  10000  1.773         0.052
    10000  1000   0.131         0.035
    10000  100    0.016         0.058
    10000  10     0.004         0.466
    10000  1      0.004         2.517

It suggests that there is probably some rule that could be used to
switch between the two algorithms. Either

    1. size() < N, or
    2. distance(first, last) > M, or
    3. distance(first, last) / size() > P.

I would think that at least vector_multiset's ranged constructor
could take advantage of this alternative.

For insert_unique, perhaps something along the lines of

    template<class Iterator>
    void insert_unique_x(Iterator first, Iterator last) {
        m_impl.insert(m_impl.end(), first, last);
        std::sort(m_impl.begin(), m_impl.end(),
                  compose_f_gx_hy(m_key_compare,
                                  KeyOfValue(),
                                  KeyOfValue()));
        // unique needs an equivalence test, not a less-than; on a sorted
        // range !(a < b) suffices, hence the not2
        m_impl.erase(std::unique(m_impl.begin(),
                                 m_impl.end(),
                                 std::not2(compose_f_gx_hy(m_key_compare,
                                                           KeyOfValue(),
                                                           KeyOfValue()))),
                     m_impl.end());
    }

would be advantageous in certain circumstances.

Just a thought.

Mark

On Sun, 4 Jun 2000, David Abrahams wrote:

> > For lower_bound, use
> > compose_f_gx_hy(m_key_compare, std::identity(), KeyOfValue()));
> > For upper_bound, use
> > compose_f_gx_hy(m_key_compare, KeyOfValue(), std::identity()));
> >
> > These will work for reasonable implementations of lower/upper_bound.
> >
> > In case of unreasonable implementations or for binary_search, both
> > members are needed.
>
> Actually, I think according to the standard they must work for all
> conforming implementations. The standard gives no leeway to compute
> lower_bound with comparisons other than as specified by:
>
> 2 Effects: Finds the first position into which value can be inserted
> without violating the ordering.
>
> 3 Returns: The furthermost iterator i in the range [first, last] such
> that for any iterator j in the range [first, i) the following
> corresponding conditions hold: *j < value or comp(*j, value) != false

Note that it is the opposite of the test which a reasonable
implementation will make. The algorithm is value < *j, but the above
uses *j < value. I don't think that the standard prevents an
implementation from using the compare object in the reverse direction.
There may be an "optimization". Cover your six?

> As far as binary_search is concerned, well, I don't ever really need
> to use that, do I? lower_bound makes so much more sense...

Yes. Binary_search has limited use. However, equal_range could use the
algorithm rather than using lower_bound and upper_bound. It would also
require both directions.

> > I don't think one of these can be generated by compose.
>
> You mean a "GeneralCompare" object? Probably not ;)

Yes.

Later,

John