## Merge sort failing to work in N log N

Question

I have written a merge sort algorithm, but it fails to sort in N log N time. The code that sorts the array is:

```
// l and list are global int arrays (declared elsewhere)
void merge(int start, int mid, int end) {
    int i, j, q;
    for (i = start; i <= end; i++)
        l[i] = list[i];
    i = start; j = mid + 1; q = start;
    while (i <= mid && j <= end) {
        if (l[i] < l[j])
            list[q++] = l[i++];
        else
            list[q++] = l[j++];
    }
    while (i <= mid)
        list[q++] = l[i++];
    // remaining right-half elements are already in place
}
void mergesort(int start, int end) {
    int mid;
    if (start < end) {
        mid = (start + end) / 2;
        mergesort(start, mid);
        mergesort(mid + 1, end);
        merge(start, mid, end);
    }
}
```

However, if I sort, for example, 7800 numbers, the runtime is approximately 1.243 ms. The same sample is sorted by `std::sort` in 0.668 ms, and I know that `std::sort` has a complexity of N log N. What is wrong with my code? I cannot find where the time is being wasted.

Measurement of time:

```
#include <time.h>
clock_t start = clock();
//SORTING ALGORITHM HERE//
clock_t stop = clock();
double elapsed =(stop - start) * 1000.0 / CLOCKS_PER_SEC;
```


## Answers (2)

Assuming your implementation is correct, two O(N log N) algorithms won't necessarily run in the same amount of time. Asymptotic complexity is a measure of how the resources required to run a program grow as the input becomes very large. Just to give you an example, the following loops are both O(1), since each of them always runs a constant number of steps:

But there's no question that the second will take much longer to run. As a matter of fact, the runtime gap between these two loops will be significantly larger than the gap you are observing between your sort algorithm and `std::sort`. That's because asymptotic analysis neglects constants. Moreover, asymptotic complexity is usually stated for the average or worst case. The same algorithm can run in more or less time on inputs of equal size, depending on the data.

Not to mention that `std::sort` is very likely not a single sorting algorithm. It probably uses different strategies depending on the size of the array; in fact, some implementations of `std::sort` use a mixture of algorithms.

The proper way to analyze a program's complexity is by reading the code. For a numerical approach, the closest you can do is to run your program, without comparing it to other programs, for several inputs of different sizes, then plot a graph and observe the curve.

In the case of Visual Studio, `std::sort()` is a mix of quick sort, heap sort (used only to prevent worst-case O(n^2) time complexity), and insertion sort, while `std::stable_sort()` is a mix of merge sort and insertion sort. Both are reasonably fast, but it's possible to write faster code. The example code in the question copies data before every merge, which consumes time. This can be avoided by doing a one-time allocation of a working buffer and switching merge direction based on the level of recursion, using a pair of mutually recursive functions (shown below) or a boolean parameter to control merge direction (not used in the example below).

Example C++ code for a top-down merge sort that is reasonably optimized (a bottom-up merge sort would be slightly faster, as it skips the recursion used to generate indices, using iteration instead).