Challenge: the fastest way to filter doubled items from a list.
-
I know... but the question forums are primarily used for showing code that doesn't work, so I just wanted to throw a stone in the pond. BTW: I just realized that I know you personally. We both play in the same chess club. :-D
Giraffes are not real.
You could have wrinkled the C# pond; the programming forums are for discussions, not just for "help, my code fails" kind of questions. CU.
Luc Pattyn [My Articles] Nil Volentibus Arduum
-
To all the clever coders out here! I have a list full of strings (doubledList) and I want to create a new list containing every unique string from the first list exactly once (UniqueList). My solution is this (C#):
private List<string> LoadUniqueList(List<string> doubledList)
{
    List<string> UniqueList = new List<string>();
    foreach (string item in doubledList)
    {
        bool x = true;
        foreach (string compare in UniqueList)
        {
            if (item == compare)
            {
                x = false;
            }
            if (!x) break;
        }
        if (x) UniqueList.Add(item);
    }
    return UniqueList;
}
Can you make it lighter, faster, sharper? :java::java::java: X| Personally, I have no idea if it's even possible; otherwise it wouldn't be very interesting for me, of course. Also, do you think it's a good idea to use this part of the forum for similar challenges? I think that starting with a working solution for a generic problem and then discussing whether you can improve it would be a great way to create more learning opportunities.
Giraffes are not real.
private List<string> LoadUniqueList(List<string> doubledList)
{
    return doubledList.Distinct().ToList();
}
LINQ is fun :D And as for using this part of the forum for challenges, I think it's a great idea - hardly anyone ever posts here to begin with, since people don't usually look at their own code and say "Damn, that was clever - I need to share it with someone!"
-
Such an algorithm is O(n²), thus for large collections it will be very slow: each time you double the collection size, it becomes four times slower. Using a HashSet, a SortedSet or a Dictionary would be much faster for large collections. The order would then be about O(n) with hashing, or O(n log n) with a sorted structure (binary search). Another alternative would be to sort the list, if the order does not matter; it is then easier to skip or remove the duplicates.
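As a sketch of what that looks like in practice (my own wording, not code from the original post): HashSet<T>.Add returns false when the value is already present, so one pass over the input both detects duplicates and preserves the order of first appearance.

```csharp
using System.Collections.Generic;

// Sketch of the HashSet-based approach suggested above: each Add/lookup
// is O(1) on average, so the whole pass is about O(n) instead of O(n²).
static List<string> LoadUniqueList(List<string> doubledList)
{
    var seen = new HashSet<string>();
    var uniqueList = new List<string>();
    foreach (string item in doubledList)
    {
        // Add returns false if the item was already in the set,
        // so each string is copied to the result only once.
        if (seen.Add(item))
            uniqueList.Add(item);
    }
    return uniqueList;
}
```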
Philippe Mori
-
I'd concatenate the lists ( O(1), or O(n) if you must copy them ), then sort the result ( raising complexity to O(n log n) ) and then remove the duplicates ( O(n) again ). That means your complexity will be ruled by your sorting algorithm: choose whichever algorithm fits your data best.
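A sketch of that sort-then-sweep idea (my own illustration, assuming the caller does not need the original order preserved):

```csharp
using System.Collections.Generic;

// Sort first; duplicates are then adjacent, so a single sweep that
// compares each item with the last kept value removes them.
// Sorting dominates: O(n log n) overall.
static List<string> LoadUniqueSorted(List<string> doubledList)
{
    var sorted = new List<string>(doubledList);   // copy, O(n)
    sorted.Sort();                                // O(n log n)
    var uniqueList = new List<string>();
    foreach (string item in sorted)
    {
        if (uniqueList.Count == 0 || uniqueList[uniqueList.Count - 1] != item)
            uniqueList.Add(item);                 // first of each run
    }
    return uniqueList;
}
```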
-
I don't agree that hashing would make it O(n), as it requires handling hash collisions. This of course is not a real problem for average use cases, but after all that's not what the Landau notation is about, right? ;-) Usually this would end up taking O(n log(n)) time, even with hashing. The main difference would be the constant that is involved. Of course it is hard to tell, but I would think that the HashSet would provide highly optimized code to do just this, and it would be recommended to just use that.
-
Usually I sort the list, then iterate through it, outputting each change. When a change is found, its value is used as the comparison for the next change detection. This code outputs to a string.
iList.Sort();
txtOutput.Text = "";
string lastEntry = null;
foreach (string str in iList)
{
    if (str != lastEntry)
    {
        txtOutput.Text += (str + "\r\n");
        lastEntry = str;
    }
}
And, yes, I know that StringBuilder might be more appropriate for some lists.
I'm not a programmer but I play one at the office
-
It's always dangerous to assume that all performance issues can be assessed using big-O notation as an evaluation method, particularly when not comparing apples with apples in terms of implementation specifics. Hashing resolves down to a numeric comparison as opposed to a string evaluation, but involves the overhead of a hashing function; these are important factors, and there are also allocation issues particular to any implementation. Trivialising performance evaluation in this way, where growth rate is only one factor, is always a dangerous proposition.
-
public List<string> RemoveDoubles(List<string> doublelist)
{
    return doublelist.Distinct().ToList();
}
-
You've been given several options to use pre-existing code. That makes it easier; it doesn't necessarily make it faster. I can certainly see how to make your code slightly faster:
foreach (string item in doubledList)
{
    bool x = true;
    foreach (string compare in UniqueList)
    {
        if (item == compare)
        {
            x = false;
            break;
        }
    }
    if (x) UniqueList.Add(item);
}
There is one less if statement needed. The break command knows it is breaking the inner foreach, not the if statement it is in. If you have 100 items with one duplicate, the first outer loop saves nothing; after that, you've removed an if check for every inner loop executed. If the first and last are duplicates, you've saved 4800+ if checks. (The first outer loop never executes an inner loop, and every loop after that executes the inner loop one fewer time than outer loops taken, except the last, which takes just one inner loop. Pairing them up: the last full inner loop runs 98 times and the first runs once, 98 + 1 matched with 97 + 2, and so on, giving 49 pairs of 98 inner loops; 49 × 98 = 4802, plus 1 for the 100th outer loop. My math may be wrong, but I'm sure it saves over 4800 if tests.) Maybe a sorted unique list could be faster? I'll think about that. Is a sorted list OK?
-
Here are the results of a performance test of all the ways mentioned (at the time of this writing) in this thread: skylark-software.com[^]
-
Good job, I don't know why anyone would down-vote your post, but I think it's nice to see the comparison. And, surprisingly (at least to me), the simplest answers were the best. (I thought the sorted list would be the best.)
Thanks. The down vote is strange to me too. Perhaps it's because I didn't post it as a CodeProject article. :confused: As I mentioned, I expected Distinct() to be about the same as the sorted list and was surprised it was so much faster; apparently it's using a HashSet (or equivalent) internally. But I was really surprised at the difference in speed between the sorted list and the hash set. I thought they'd be closer to each other. I'm guessing it's probably a difference related to inserts.
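For anyone wanting to reproduce this kind of comparison, here is a minimal Stopwatch harness (my own sketch; the data shape, sizes, and seed are illustrative assumptions, not what the linked test used):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

static class DedupTiming
{
    static void Main()
    {
        // Build a list with many duplicates: 100,000 strings
        // drawn from only 1,000 distinct values.
        var rand = new Random(42);
        var data = new List<string>();
        for (int i = 0; i < 100000; i++)
            data.Add("item" + rand.Next(1000));

        // Time the LINQ approach; swap in any other implementation
        // from this thread to compare.
        var sw = Stopwatch.StartNew();
        var unique = data.Distinct().ToList();
        sw.Stop();
        Console.WriteLine("Distinct: " + unique.Count + " items in "
                          + sw.ElapsedMilliseconds + " ms");
    }
}
```

Timings from a single run like this are noisy; repeating each method several times and discarding the first (JIT-warmup) pass gives steadier numbers.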
-
Exactly. Iterate the input List, add new items to the HashSet and the result List.
this problem is complex!
-
Agreed, the example given will run in N^2 time, and your solution will run in N time.
You can only be young once. But you can always be immature. - Dave Barry
-
Do not forget that a HashSet must somehow check the constraint that there must be no duplicates. The time needed for that operation must also be taken into account, and it will increase with the size of the HashSet.
-
Harley L. Pebley wrote:
it's using a HashSet (or equivalent) internally
It uses its own Set type, which is basically a minimal implementation of a HashSet. So I am surprised that Distinct is slower than a HashSet, because it uses a dedicated implementation instead of a general solution. Greetings - Jacek
-
So I know this thread is centuries old, but I decided to comment anyway. I haven't looked at what others have posted yet, so here's my unadulterated take:
var uniques = new List<string>();
for (int i = 0; i < doubled.Count; i++)
{
    if (!uniques.Contains(doubled[i]))
    {
        uniques.Add(doubled[i]);
    }
}
Just one loop, although I know Contains() is basically an O(n) method, so the time difference doesn't seem like much between your code and mine. But mine is fewer lines ;)
-
call this guy from Fsharp.Core!! >> http://msdn.microsoft.com/en-us/library/ee353861.aspx[^]
Regards Vallarasu S | FSharpMe.blogspot.com
-
You can put the strings that repeat often at the beginning of your unique list, so the loop does not have to iterate as many times. The unique list will then be ordered by how often each entry repeats instead of by the order of the original list. For example, for the input: one three two two three three, the unique list will be: three two one instead of: one three two, because there are the most threes in the input. But this will not always be an optimization, because there may be no most-occurring string if the input is completely randomized. So, for example, it would not work for randomly generated strings, but it would work for predictable user info. Edit: you can also order them by their length. For example, if you have the strings: reallyLongStringWhichIsReallyLong anotherString reallyLongStringWhichIsReallyLong shortString shortString ... even though shortString and reallyLongStringWhichIsReallyLong occur the same number of times, it takes the computer longer to check reallyLongStringWhichIsReallyLong if it is at the front, so it would be more efficient to move it to the back of the unique list. Also, how long does your C# function take to check these strings: "stuff", "str", "str", "morestuff", "str", "not unique", "evenMoreStuff", "stuff", "unique", "morestuff", "not unique"? In C++ it takes 0.124 seconds when I print the results to the screen and 0.074 when I do not. In C++ with gcc:
//0.124 and 0.074
#include <iostream>
#include <string>
#include <vector>
using namespace std;

vector<string> getUnique(vector<string> listStr)
{
    vector<string>::iterator listIt;
    vector<string> uniqueStr;
    vector<string>::iterator uniqueIt;
    for (listIt = listStr.begin(); listIt < listStr.end(); listIt++)
    {
        bool exists = false;
        for (uniqueIt = uniqueStr.begin(); uniqueIt < uniqueStr.end(); uniqueIt++)
        {
            if (*listIt == *uniqueIt)
            {
                exists = true;
                break;
            }
        }
        if (!exists) uniqueStr.push_back(*listIt);
    }
    cout << "list: ";
    for (listIt = listStr.begin(); listIt < listStr.end(); listIt++) cout << " " << *listIt;
    cout << "\n\nunique list: ";
    for (uniqueIt = uniqueStr.begin(); uniqueIt < uniqueStr.end(); uniqueIt++) cout << " " << *uniqueIt;
    return uniqueStr;
}

int main()
{
    vector<string> str = {"stuff", "str", "str", "morestuff", "str", "not unique", "evenMoreStuff", "stuff", "unique", "morestuff", "not unique"};
    getUnique(str);
    return 0;
}