Implementation of MapReduce using TTG #221
base: master
Conversation
using namespace ttg;

template<typename T>
using Key = std::pair<std::pair<std::string, T>, T>;
Do we need to send the filename as part of the key? Is it used after reading from the file?
This example counts words from multiple files. To keep the keys unique, I included the filename in the key. We could probably come up with a different way of generating unique keys without using the filename, for example a file ID.
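A minimal sketch of why the filename component matters, using the `Key` alias from the diff (the meaning of the numeric fields is an assumption; the diff does not spell them out):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Key layout from the diff: ((filename, id), id). The filename
// distinguishes keys that would otherwise collide across files.
template <typename T>
using Key = std::pair<std::pair<std::string, T>, T>;

// Two chunks with identical numeric components stay distinct
// as long as they originate from different files.
Key<int> a{{"fileA.txt", 0}, 0};
Key<int> b{{"fileB.txt", 0}, 0};
```

Swapping the `std::string` for an integer file ID would keep the same uniqueness property while making the key cheaper to copy and hash.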
{
  std::transform(word.begin(), word.end(), word.begin(), ::tolower);
  //std::cout << "Mapped " << word << std::endl;
  resultMap.insert(std::make_pair(word, 1));
What if the chunk contains the same word twice? And why use a multimap in the first place?
I used a multimap so it can hold duplicate keys and keep them sorted, which makes counting easier in this example.
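An illustrative sketch of the counting pattern described above (not the PR's exact code): because `std::multimap` allows duplicate keys and keeps them sorted and adjacent, each occurrence of a word can be inserted as its own `(word, 1)` entry, and counting a word reduces to counting entries with that key.

```cpp
#include <map>
#include <string>

// Hypothetical helper: insert one entry per occurrence, then count.
inline std::size_t demo_multimap_count() {
  std::multimap<std::string, int> resultMap;
  for (const std::string& w : {"the", "cat", "the"})
    resultMap.insert(std::make_pair(w, 1));  // duplicates are kept
  return resultMap.count("the");             // entries are grouped by key
}
```

The same result could be had with `std::map<std::string, int>` and `++counts[word]`, which also answers the duplicate-word question since repeated words within a chunk simply increment the counter.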
This example counts the words in the given files and prints the result to the screen. The reduction is inefficient because it happens in a single process. The example works with the PaRSEC backend, but with the MADNESS backend it complains about unprocessed tasks even though the tasks actually get executed. This could be a problem with hashing std::string and needs to be fixed.
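Since the MADNESS issue is tentatively attributed to hashing `std::string`, here is a hedged sketch of a deterministic hash over all three fields of the composite key. The `hash_key` helper is hypothetical, not TTG's actual hashing hook, and whether this resolves the backend complaint is untested; note also that `std::hash<std::string>` is only guaranteed stable within a single process, which can matter for distributed backends.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <utility>

template <typename T>
using Key = std::pair<std::pair<std::string, T>, T>;

// Boost-style hash_combine: the golden-ratio constant and shifts
// mix the two hashes.
inline std::size_t hash_combine(std::size_t seed, std::size_t v) {
  return seed ^ (v + 0x9e3779b9 + (seed << 6) + (seed >> 2));
}

// Hypothetical helper: hashes filename and both numeric fields so
// equal keys always produce equal hashes.
template <typename T>
std::size_t hash_key(const Key<T>& k) {
  std::size_t h = std::hash<std::string>{}(k.first.first);
  h = hash_combine(h, std::hash<T>{}(k.first.second));
  h = hash_combine(h, std::hash<T>{}(k.second));
  return h;
}
```

Replacing the filename with an integer file ID would sidestep string hashing entirely and make the hash trivially stable across processes.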