Search – Data Science, Data Analytics and Machine Learning Consulting in Koblenz Germany https://www.rene-pickhardt.de Extract knowledge from your data and be ahead of your competition Tue, 17 Jul 2018 12:12:43 +0000 en-US hourly 1 https://wordpress.org/?v=4.9.6 Building an Autocompletion on GWT with RPC, ContextListener and a Suggest Tree: Part 0 https://www.rene-pickhardt.de/building-an-autocompletion-on-gwt-with-rpc-contextlistener-and-a-suggest-tree-part-0/ https://www.rene-pickhardt.de/building-an-autocompletion-on-gwt-with-rpc-contextlistener-and-a-suggest-tree-part-0/#comments Wed, 13 Jun 2012 13:15:29 +0000 http://www.rene-pickhardt.de/?p=1360 Over the last weeks there was quite some quality programming time for me. First of all, I built some indices on the Typology database, through which I was able to increase the retrieval speed of Typology by a factor of over 1000, which is something that rarely happens in computer science. I will blog about this soon. But having those techniques at hand, I also used them to build a better autocompletion for the search function of my online social network metalcon.de.
The search functionality is not deployed to the real site yet, but on the demo page you can find a demo showing how the completion helps you while typing. Right now the network requests are faster than Google search (which, I admit, is quite easy if you only have to handle one request per second and also have a much smaller concept space). Still, I was amazed by the ease and beauty of the program and by the fact that the autocompletion suggestions are actually more accurate than our current database search. So feel free to have a look at the demo:
http://134.93.129.135:8080/wiki.html
Right now it consists of about 150 thousand concepts which come from 4 different data sources (metal bands, metal records, tracks and German venues for heavy metal). I am pretty sure that increasing the size of the concept space by 2 orders of magnitude should not be a problem. And if everything works out fine, I will be able to test this hypothesis in my joint-project-related work, which will have a database with at least 1 million concepts that need to be autocompleted.
Even though everything I used, except for the ContextListener and my small but effective caching strategy, can be found at http://developer-resource.blogspot.de/2008/07/google-web-toolkit-suggest-box-rpc.html, and the data structure (suggest tree) is open source and can be found at http://sourceforge.net/projects/suggesttree/, I am planning to produce a series of screencasts and release the source code of my implementation together with some test data over the next weeks in order to spread the knowledge of how to build strong autocompletion engines. The planned structure of these articles is:

part 1: introduction: which parts exist and where to find them

  • Set up a gwt project
  • Erase all files that are not required
  • Create a basic Design

part 2: AutoComplete via RPC

  • Necessary client-side stuff
  • Integration of SuggestBox and Suggest Oracle
  • Setting up the Remote procedure call

part 3: A basic AutoComplete Server

  • show how to fill it with data and where to hook it into the autocompletion
  • disclaimer: this is not a good solution yet
  • it always returns the same suggestions

part 4: AutoComplete pulling suggestions from a database

  • including a database
  • locking the database for every autocomplete HTTP request
  • show why this is a poor design
  • demonstrate the resulting slow response times

part 5: Introducing the ContextListener

  • introducing a ContextListener
  • demonstrate the remaining lack of speed with every network request

part 6: Introducing a fast Index (Suggest Tree)

  • include the suggest tree
  • demonstrate increased speed

part 7: Introducing client side caching and formatting

  • introducing caching
  • demonstrate no network traffic for cached completions

topics not covered (but for some points I'd be happy about hints):

  • on user login: create a personalized suggest tree and save it in some context data structure
  • merging of the personalized AND the global index (Google will only display 2 or 3 personalized results)
  • index compression
  • scheduling / caching / precalculation of the index
  • non-prefix retrieval (merging?)
  • CSS of the retrieval box
  • parallel architectures for searching
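To give a rough idea of what the server-side index in parts 3 to 6 does, here is a minimal sketch in plain Java. To be clear: this is not the SuggestTree library from sourceforge, just my own hypothetical illustration of the core idea, namely that in a sorted map every prefix corresponds to a contiguous key range, and the server returns the top-k entries of that range by weight.

```java
import java.util.*;

// Minimal sketch of a server-side prefix index for autocompletion.
// Not the SuggestTree library, just an illustration of the idea:
// all concepts are kept sorted, so every prefix corresponds to a
// contiguous range, and we return the top-k entries of that range.
public class PrefixIndex {
    private final TreeMap<String, Integer> weights = new TreeMap<>();

    public void add(String concept, int weight) {
        weights.put(concept.toLowerCase(), weight);
    }

    // Return up to k suggestions for the prefix, ordered by descending weight.
    public List<String> suggest(String prefix, int k) {
        String p = prefix.toLowerCase();
        // All keys in [p, p + '\uffff') share the prefix p.
        SortedMap<String, Integer> range = weights.subMap(p, p + '\uffff');
        List<Map.Entry<String, Integer>> hits = new ArrayList<>(range.entrySet());
        hits.sort((a, b) -> b.getValue() - a.getValue());
        List<String> result = new ArrayList<>();
        for (int i = 0; i < Math.min(k, hits.size()); i++) {
            result.add(hits.get(i).getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        PrefixIndex index = new PrefixIndex();
        index.add("Metallica", 100);
        index.add("Megadeth", 90);
        index.add("Meshuggah", 40);
        index.add("Slayer", 80);
        System.out.println(index.suggest("me", 2)); // [metallica, megadeth]
    }
}
```

A real suggest tree precomputes the top-k list per prefix node, which is what makes it so much faster than re-sorting the range on every keystroke.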
Google Video on Search Quality Meeting: Spelling for Long Queries by Lars Hellsten https://www.rene-pickhardt.de/google-video-on-search-quality-meeting-spelling-for-long-queries-by-lars-hellsten/ https://www.rene-pickhardt.de/google-video-on-search-quality-meeting-spelling-for-long-queries-by-lars-hellsten/#respond Mon, 12 Mar 2012 19:11:04 +0000 http://www.rene-pickhardt.de/?p=1196 Amazing! Today I had a discussion with a coworker about transparency and the way companies should be more open about what they are doing! And what happens on the same day? One of my favorite web companies has decided to publish a short video taken from their weekly search quality meeting!
The change proposed by Lars Hellsten is that instead of only checking the first 10 words for possible spelling corrections, one could predict which two words are most likely misspelled and add an additional window of ±5 words around them. They discuss how this change achieves much better scores than the old approach.
The entire video is interesting because they say that semantic context is usually captured by using 3-grams. My students used up to 5-grams in order to make their sentence predictions, and the machine learning already told them that 4-grams would be sufficient to make syntactically and semantically correct predictions.
Anyway enjoy this great video by Google and thanks to Google for sharing this:

balanced binary search trees exercise for algorithms and data structures class https://www.rene-pickhardt.de/balanced-binary-search-trees-exercise-for-algorithms-and-data-structures-class/ https://www.rene-pickhardt.de/balanced-binary-search-trees-exercise-for-algorithms-and-data-structures-class/#comments Tue, 29 Nov 2011 14:20:40 +0000 http://www.rene-pickhardt.de/?p=971 I created some exercises regarding binary search trees. This time there is no coding involved. My experience from teaching former classes is that many people have a hard time understanding why trees are useful and what the dangers of these trees are. Therefore I have created some straightforward exercises that nevertheless involve some work and will hopefully help the students to better understand and internalize the concepts of binary search trees, which are in my opinion among the most fundamental and important concepts in a class about algorithms and data structures.

Part A: finding elements in a binary search tree – 1 Point

You are given a binary search tree and you know that the root element has the value 2. Considering that the search path for finding an element in the tree is unique, decide which of the following two lists can be an actual search path for retrieving the element 363 from the binary search tree. Why?

  • 2, 252, 401, 398, 330, 344, 397, 363
  • 2, 252, 397, 398, 330, 344, 401, 363
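If you want to check your answer, here is a small Java program of mine (not part of the exercise itself) that encodes the underlying reasoning: while searching for a target in a BST, every comparison narrows an interval of possible values, so a list is a valid search path exactly if each element lies within the bounds set by the previous comparisons.

```java
public class SearchPathCheck {
    // Returns true if 'path' can be the sequence of nodes visited while
    // searching for 'target' in some binary search tree (distinct keys).
    public static boolean isValidSearchPath(int[] path, int target) {
        if (path.length == 0 || path[path.length - 1] != target) return false;
        long low = Long.MIN_VALUE, high = Long.MAX_VALUE;
        for (int value : path) {
            // Every visited node must lie strictly between the bounds
            // established by the comparisons made so far.
            if (value <= low || value >= high) return false;
            if (target < value) high = value;      // search continues left
            else if (target > value) low = value;  // search continues right
        }
        return true;
    }

    public static void main(String[] args) {
        int[] a = {2, 252, 401, 398, 330, 344, 397, 363};
        int[] b = {2, 252, 397, 398, 330, 344, 401, 363};
        System.out.println(isValidSearchPath(a, 363)); // true
        System.out.println(isValidSearchPath(b, 363)); // false
    }
}
```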

Part B: Create binary search trees – 1 Point

You are given an empty binary search tree and two lists of the same elements.

  • 10, 20, 5, 15, 2, 7, 23
  • 10, 5, 7, 2, 20, 23, 15

For both lists draw all the trees that are created while inserting one element after the other one.

Part C: skewed binary search trees and traversing trees – 1 Point

Compare the trees from Part B to the tree you would get when inserting the numbers in the order 2, 5, 7, 10, 15, 20, 23.
To understand the different tree traversals, please give the results of the inorder and preorder traversals applied to the trees from Parts B and C.
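If you want to check your drawings from Parts B and C, here is a small self-contained Java sketch (my own illustration, not required for the exercise) that inserts both lists and prints the traversals. It also shows that the two lists from Part B actually produce the same tree.

```java
import java.util.*;

public class BstExercise {
    static final class Node {
        int key; Node left, right;
        Node(int key) { this.key = key; }
    }

    // Standard unbalanced BST insertion.
    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else root.right = insert(root.right, key);
        return root;
    }

    static Node build(int[] keys) {
        Node root = null;
        for (int k : keys) root = insert(root, k);
        return root;
    }

    static void inorder(Node n, List<Integer> out) {
        if (n == null) return;
        inorder(n.left, out); out.add(n.key); inorder(n.right, out);
    }

    static void preorder(Node n, List<Integer> out) {
        if (n == null) return;
        out.add(n.key); preorder(n.left, out); preorder(n.right, out);
    }

    public static void main(String[] args) {
        Node t1 = build(new int[] {10, 20, 5, 15, 2, 7, 23});
        Node t2 = build(new int[] {10, 5, 7, 2, 20, 23, 15});
        List<Integer> pre1 = new ArrayList<>(), pre2 = new ArrayList<>();
        preorder(t1, pre1); preorder(t2, pre2);
        System.out.println(pre1); // [10, 5, 2, 7, 20, 15, 23]
        System.out.println(pre2); // identical: both lists build the same tree
        List<Integer> in1 = new ArrayList<>();
        inorder(t1, in1);
        System.out.println(in1); // sorted: [2, 5, 7, 10, 15, 20, 23]
    }
}
```

Note that the inorder traversal is always sorted, no matter in which order you insert; the preorder traversal is what distinguishes the tree shapes.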

Part D: Balanced binary search trees. Counting Permutations – 2 Points

We realize that trees can have different topologies as soon as the order of the inserted items changes. Since balanced trees are the most desirable, your task is to count how many permutations of our 7 elements will lead to a balanced binary search tree!
One way to do so is to write down all the permutations that lead to a balanced binary search tree, but you do not have to do this explicitly. It is also fine to write down all classes and cases of permutations and count them.
Compare this number to the number of all permutations of 7 elements (= 7!) and give the probability of ending up with a balanced binary search tree when given a random permutation of 7 different elements.
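There is also a brute-force route to check your count: with only 7! = 5040 permutations you can simply try them all. The following Java sketch (again just my illustration, not part of the exercise) inserts every permutation into a BST and tests whether the result is perfectly balanced.

```java
public class CountBalanced {
    static final class Node { int key; Node left, right; Node(int k) { key = k; } }

    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else root.right = insert(root.right, key);
        return root;
    }

    static int height(Node n) { // height in edges, -1 for the empty tree
        if (n == null) return -1;
        return 1 + Math.max(height(n.left), height(n.right));
    }

    static int size(Node n) { return n == null ? 0 : 1 + size(n.left) + size(n.right); }

    // A tree with 2^k - 1 nodes is perfectly balanced iff it is complete,
    // i.e. its node count equals that of a full tree of its height.
    static boolean isPerfectlyBalanced(Node root) {
        return (1 << (height(root) + 1)) - 1 == size(root);
    }

    static int count = 0;

    // Generate all permutations of a[i..] by swapping (classic recursion).
    static void permute(int[] a, int i) {
        if (i == a.length) {
            Node root = null;
            for (int k : a) root = insert(root, k);
            if (isPerfectlyBalanced(root)) count++;
            return;
        }
        for (int j = i; j < a.length; j++) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            permute(a, i + 1);
            t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }

    public static void main(String[] args) {
        permute(new int[] {2, 5, 7, 10, 15, 20, 23}, 0);
        System.out.println(count + " of 5040 permutations give a balanced tree");
    }
}
```

I won't print the number here so as not to spoil the exercise; run it yourself and compare with your counting argument.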

Part E: A closed formula for the probability to create a balanced binary search tree – 2 Extra Points

Your task is to find and prove a formula that states the number of permutations of the natural numbers 1, 2, …, 2^k-1 such that inserting the numbers will create a balanced binary search tree.
Give a closed formula for the probability P(k) of ending up with a balanced search tree. Give the explicit results for k = 1, …, 10.

My Blog guesses your name – Binary Search Exercise for Algorithms and data structures class https://www.rene-pickhardt.de/my-blog-guesses-your-name-binary-search-exercise-for-algorithms-and-data-structures-class/ https://www.rene-pickhardt.de/my-blog-guesses-your-name-binary-search-exercise-for-algorithms-and-data-structures-class/#comments Mon, 07 Nov 2011 11:59:05 +0000 http://www.rene-pickhardt.de/?p=857 Binary search (http://en.wikipedia.org/wiki/Binary_search_algorithm) is a very basic algorithm in computer science, but it is nevertheless important to understand the fundamental principle behind it. Unfortunately the algorithm is taught so early, and is so simple, that beginning students sometimes have a hard time understanding the abstract principle behind it. Also, many exercises just focus on implementing the algorithm.
I tried to provide an exercise that focuses on the core principle of binary search rather than on its implementation. The exercise is split into several parts.

Exercise – Binary Search

Your task is to write a computer program that is able to guess your name!

Feel free to check out the following applet that enables my blog to guess your name in less than 16 steps!


In order to achieve this task we will look at three different approaches.

Part 1

  • Download this file containing 48,000 names. It is important for me to state that this file is under the GNU Public License; I just processed the original, much richer file, which you can find at: http://www.heise.de/ct/ftp/07/17/182/.
  • Now you can apply the binary search algorithm (both an imperative and a recursive implementation) to let the computer guess the name of the user
  • Provide a small user interface that lets the user give feedback whether his name would come before or after the current guess in a telephone book
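The guessing loop of Part 1 can be sketched as follows. The names and the callback are of course made up by me; in the real exercise the comparison answers come from the user interface instead of a lambda.

```java
import java.util.function.Function;

// Sketch of Part 1: binary search over a sorted list of names. The user's
// answers ("my name comes before this guess in a telephone book") are modeled
// by a callback so the search logic can be tested without a real UI.
public class NameGuesser {
    // answers.apply(guess) returns a negative number if the user's name comes
    // before the guess, 0 if the guess is correct, and a positive number otherwise.
    public static String guess(String[] sortedNames, Function<String, Integer> answers) {
        int lo = 0, hi = sortedNames.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            int cmp = answers.apply(sortedNames[mid]);
            if (cmp == 0) return sortedNames[mid];
            if (cmp < 0) hi = mid - 1;  // name comes before the guess
            else lo = mid + 1;          // name comes after the guess
        }
        return null; // name not in the list, which motivates Part 2
    }

    public static void main(String[] args) {
        String[] names = {"Anna", "Jonas", "Lena", "Mia", "Paul", "Rene", "Tim"};
        String secret = "Rene";
        System.out.println(guess(names, g -> secret.compareTo(g))); // Rene
    }
}
```

With 48,000 names this loop needs at most ⌈log2(48000)⌉ = 16 questions, which is exactly the "less than 16 steps" promised by the applet.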

Part 2

It could happen though that some rare names are not included in this name list. That is why your task now is to use a different approach:

  • Let the computer ask the user how many letters his name consists of.
  • Create a function BinarySearch(String from, String to, int length) and call it for example with the parameters ("AAAA", "ZZZZ", 4)
  • Use the function StringToLong in order to map a String to a Long that respects the lexicographical order; this enables you to find the middle value of two strings
  • Use the function LongToString and the user interface from Part 1 in order to present the current guess to the user

private static String LongToString(long l, int length) {
	String result = "";
	for (int i = 0; i < length; i++) {
		result = (char) ('A' + (l % 26)) + result;
		l = l / 26;
	}
	return result;
}

private static long StringToLong(String from) {
	long result = 0;
	int length = from.length();
	for (int i = 0; i < length; i++) {
		result = result * 26 + (from.charAt(i) - 'A');
	}
	return result;
}
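To illustrate how these two functions enable binary search on strings, here is a self-contained sketch with my own variants of the helpers (the exact originals may differ in detail): a name of fixed length is read as a number in base 26, so the "middle" of two strings is simply the middle of two longs.

```java
// Self-contained sketch of Part 2. Names of a fixed length over A..Z are
// mapped to numbers in base 26, which makes "find the middle of two strings"
// a plain arithmetic operation.
public class StringRangeSearch {
    static long stringToLong(String s) {
        long result = 0;
        for (int i = 0; i < s.length(); i++)
            result = result * 26 + (s.charAt(i) - 'A');
        return result;
    }

    static String longToString(long l, int length) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++) {
            sb.insert(0, (char) ('A' + (l % 26)));
            l /= 26;
        }
        return sb.toString();
    }

    // One step of BinarySearch(from, to, length): compute the middle string.
    static String middle(String from, String to, int length) {
        long mid = (stringToLong(from) + stringToLong(to)) / 2;
        return longToString(mid, length);
    }

    public static void main(String[] args) {
        System.out.println(stringToLong("AAAA")); // 0
        System.out.println(stringToLong("ZZZZ")); // 456975 = 26^4 - 1
        System.out.println(middle("AAAA", "ZZZZ", 4)); // MZZZ
    }
}
```

The user's before/after answers then shrink the interval [from, to] exactly as in Part 1, only over 26^length candidates instead of 48,000 names.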

Part 3

Our program from Part 2 is still not able to guess names containing special characters, which are important for many languages. So your task is to fix this problem by improving the approach you have just implemented.
One way to do this is to realize that char(65) = 'A', char(66) = 'B', ... and to transfer this idea to create a sorted array of all letters that are allowed to appear in a name.
Now choose freely one of the following methods:
Method A
Improve LongToString and StringToLong so that they use this array as the alphabet instead of the fixed A–Z character range.
Method B
Start guessing the name letter by letter using this array, determining every letter with the help of binary search.

Part 4 (discussion)

  • Explain briefly why approach 2 will in general take more guesses than approach 1.
  • Explain briefly why Method A and Method B need the same number of guesses in the worst case.
  • If you already knew some letters of the name, would this influence which method you choose? Why?
My opinion on the antitrust investigation by Federal Trade Commission against Google https://www.rene-pickhardt.de/my-opinion-on-the-antitrust-investigation-by-federal-trade-commission-against-google/ https://www.rene-pickhardt.de/my-opinion-on-the-antitrust-investigation-by-federal-trade-commission-against-google/#respond Sun, 26 Jun 2011 22:58:27 +0000 http://www.rene-pickhardt.de/?p=608 In the last days, there was a lot of media coverage on the antitrust investigation by the FTC against Google. In my opinion, this investigation is ridiculous. Let me explain why:

The Facts From an FTC Point of view

  • Google is the leading search engine on the net.
  • Companies and the FTC seem to believe that there is a bias: Google seems to display their own products in the search results more frequently than similar content from competitors (e.g. Maps vs. Yelp or Youtube vs. Vimeo).
  • This combination is considered to be an abuse of Google’s market share in search.

Google’s Reaction

You can read the official answer to these accusations on the Google blog. They don’t start a discussion about it but basically state their noble goals and achievements:

  • Google supports open standards and open source
  • Google does not try to lock the users in. Instead, contrary to most companies on the web, they have the data liberation front to give users the chance to escape from Google products and take their data with them.
  • Competition on the web is hard and the alternatives are only one click away
  • They believe in “user first” and create search in a way that satisfies the user
  • Using Google is a choice. If they don’t put their users first, they will lose them.

You can find more detailed information on Google’s perspective here.

My Opinion

  1. First of all, Google products like Youtube are among the most relevant products on the web. I am pretty sure an algorithm without any bias would rank resources from Youtube first and videos from other websites second.
  2. Additionally, since search for rich media content like videos is much harder than text retrieval, it makes sense to use all the information you have when solving information retrieval problems. So it is obvious that finding the relevant videos from Youtube is much easier for Google than finding the most relevant videos from other websites.
  3. Google is a service offering directions to their customers. These directions come fast and in outstanding quality. That is why Google became a worldwide market leader in search. If their search results were biased and they pushed their own products on the user, they would take a huge risk. If the user didn’t like the search results, they would quickly lose him as a customer.
  4. Assume the accusation is true. It is still Google’s good right to do that. Web search is a very hard problem. Google is a normal enterprise. There is no law that web search has to be the most objective search that is possible. Especially since there is no universal truth or ranking to compare with anyway.

The internet is one of the hardest markets to compete in. If you change your product for the worse, the alternative is indeed only one click away. Therefore, antitrust investigations on the internet make almost no sense. By creating an extraordinary product, focusing on users’ needs and having high ideals, Google was able not only to remain successful over a decade, but also to continue growing. This would be impossible with a dishonest product on the web. So dear FTC, before chasing Google with antitrust investigations, I suggest chasing some other companies.
Yahoo: Look at their portal. Yahoo was the leading search engine before Google was launched. After a while, they started pushing a lot of additional products on their users. Many of them are still out there and produce good revenue streams for Yahoo. (Editorial News, Mail, …)
Facebook: They mislead everyone who has a fan page by not communicating clearly that having a thousand fans does not really offer the opportunity to communicate with all of them or their friends. Also, look at the Open Graph API. This API is everything but open. Its name is one of the most misleading marketing devices I have seen in my life.
Microsoft: Internet explorer is still one of the most widely used browsers in the world. Talk to any technically savvy person and you will realize that this is not about a good product but rather because Microsoft abused their reach. By the way, in Google Chrome’s search box, you can choose right away which search engine should be your default. They don’t force you to use Google.
Facebook / Microsoft + Bing: If you find a person by searching for them on Bing and that person happens to be on Facebook, Bing offers you the option to send them a message directly without opening Facebook. Look at this video and decide for yourself. Remember that Microsoft invested $240 million to buy 1.6% of Facebook’s shares.

Apple: The popularity of the iPod allowed them to create the iPhone, a product locking in people. You cannot use certain standard products and software on it because Apple doesn’t want to pay the license fee. You get locked into iTunes. There is almost no way to export your music out of iTunes to other systems or devices.
Ebay / Paypal: Well yes, Paypal solved problems. But was it necessary to market it this aggressively on eBay?
Disclaimer: For all the companies and products I mentioned – except Facebook – I still think it is their right to do what I stated here. To quote a famous man: “It’s ok. Let the market decide. It is called competition.” (Eric Schmidt)
Offtopic: I talked about how Google is not locking in users. There is a funny video from the Onion News Network about this topic which I want to share with you:

What are the 57 signals google uses to filter search results? https://www.rene-pickhardt.de/google-uses-57-signals-to-filter/ https://www.rene-pickhardt.de/google-uses-57-signals-to-filter/#comments Tue, 17 May 2011 22:58:16 +0000 http://www.rene-pickhardt.de/?p=397 Since my blog post on Eli Pariser’s TED talk about the filter bubble became quite popular, and a lot of people seem to be interested in which 57 signals Google would use to filter search results, I decided to extend the list from my article and list the signals I would use if I were Google. It might not be 57 signals but I guess it is enough to get an idea:

  1. Our Search History.
  2. Our location (verified -> more information)
  3. The browser we use
  4. The browser’s version
  5. The computer we use
  6. The language we use
  7. the time we need to type in a query
  8. the time we spend on the search result page
  9. the time between selecting different results for the same query
  10. our operating system
  11. our operating system’s version
  12. the resolution of our computer screen
  13. average amount of search requests per day
  14. average amount of search requests per topic (to finish search)
  15. distribution of search services we use (web / images / videos / real time / news / mobile)
  16. average position of search results we click on
  17. time of the day
  18. current date
  19. topics of ads we click on
  20. frequency we click advertising
  21. topics of adsense advertising we click while surfing other websites
  22. frequency we click on adsense advertising on other websites
  23. frequency of searches of domains on Google
  24. use of google.com or google toolbar
  25. our age
  26. our sex
  27. use of the “I’m Feeling Lucky” button
  28. do we use the enter key or mouse to send a search request
  29. do we use keyboard shortcuts to navigate through search results
  30. do we use advanced search commands  (how often)
  31. do we use igoogle (which widgets / topics)
  32. where on the screen do we click besides the search results (how often)
  33. where do we move the mouse and mark text in the search results
  34. amount of typos while searching
  35. how often do we use related search queries
  36. how often do we use autosuggestion
  37. how often do we use spell correction
  38. distribution of short / general  queries vs. specific / long tail queries
  39. which other google services do we use (gmail / youtube/ maps / picasa /….)
  40. how often do we search for ourselves

Uff, I have to say that after 57 minutes of brainstorming I am running out of ideas for the moment. But this might be because it is already one hour after midnight!
If you have some other ideas for signals or think some of my guesses are totally unreasonable, why don’t you tell me in the comments?
Disclaimer: this list of signals is pure guesswork based on my knowledge and education in data mining. Not one signal I name might correspond to the 57 signals Google is using. In the future I might discuss why each of these signals could be interesting. But remember: as long as you have high diversity in the distribution, you are fine with any list of signals.

Social news streams – a possible PhD research topic? https://www.rene-pickhardt.de/social-news-streams-a-possible-phd-research-topic/ https://www.rene-pickhardt.de/social-news-streams-a-possible-phd-research-topic/#comments Mon, 25 Apr 2011 22:03:08 +0000 http://www.rene-pickhardt.de/?p=351 It has now been two months of reading papers since I started my PhD program. Enough time to think about possible research topics. I am more and more interested in search, social networks in general and social news streams in particular. It is obvious that it is becoming more and more important to aggregate news around a user’s interests and social circle and display them to the user in an efficient manner. Facebook and Twitter are doing this in an obvious way, but Google, Google News and a lot of other sites have similar products.

Too much information in one’s social environment

In order to create a news stream, one possibility is to just show the most recent information to the user (as Twitter does). Due to the huge amount of information created, one wants to filter the results in order to provide a better user experience. Facebook first started to filter the news stream on their site, which led to the widely spread discussion about their ironically named EdgeRank algorithm. Many users seem to be unhappy with the user experience of Facebook’s Top News.
Also, for some information, such as an upcoming event, the moment it becomes available might not be the best moment to display it.

Interesting research hook points and difficulties

I observed these trends and realized that this problem can be seen as a special case of search or, more generally, of recommendation engines in information retrieval. We want to obtain the most relevant information updates within a certain time window for every specific user.
This problem seems to me algorithmically much harder than web search, where the results don’t have this time component and for a long time weren’t personalized to the user’s interests either. The time component makes the question of relevance hard to decide. The information is new and you don’t have any votes or indicators of relevance. Consider a news source or person in someone’s environment that wasn’t important before. All of a sudden this person could provide highly relevant and useful information to the user.

My goal and roadmap

Fortunately, in the past I created metalcon.de together with several friends. Metalcon is a social network for heavy metal fans. On metalcon, users can access information (CD releases, upcoming concerts, discussions, news, reviews, …) about their favorite music bands, concerts and venues in their region, as well as updates from their friends. This information can perfectly be displayed in a social news stream. On the other hand, metalcon users share information about their taste in music, the venues they go to and the people they are friends with.
This means that I have a perfect sandbox to develop and test (with real users) some smart social news algorithms that are supposed to aggregate and filter the most relevant news to our users based on their interests.
Furthermore regional information and information about music are available as linked open data. So the news stream can easily be enriched with semantic components.
Since I am about to redesign (a lot of work) metalcon for the purpose of research and I am about to go into this direction for my PhD thesis I would be very happy to receive some feedback and thoughts about my suggestions of my future research topic. You can leave a comment or contact me.
Thank you!

Current Achievements:

Facebook User Search: Ever wondered how Facebook is more social than others? https://www.rene-pickhardt.de/facebook-user-search-ever-wondered-how-facebook-is-more-social-than-others/ https://www.rene-pickhardt.de/facebook-user-search-ever-wondered-how-facebook-is-more-social-than-others/#comments Mon, 14 Mar 2011 21:58:03 +0000 http://www.rene-pickhardt.de/?p=299 After Eli’s talk on TED and my recent article about the filter bubble, I decided to dig a little deeper into Facebook’s EdgeRank algorithm, which decides which updates appear in your news feed. I found some more scientific background on how EdgeRank really works. Even though EdgeRank was first mentioned at Facebook F8 Live on April 21st, 2010, it is already mentioned in a footnote of the scientific paper “All friends are not equal: using weights in social graphs to improve search” by Sudheendra Hangal, Diana MacLean, Monica S. Lam and Jeffrey Heer, all from the Computer Science department at Stanford University.
Inspired by this paper I ran a little test to compare the user search of Facebook and of StudiVZ, once (a long time ago) Germany’s biggest social network. Not surprisingly, Facebook clearly won the battle. But let me first give a brief overview of how social networks rose in Germany.

History of Facebook and StudiVZ

So in Germany there was this Facebook clone (let’s call it StudiVZ) starting in late 2005. Due to the fact that hardly anyone knew of Facebook, and because StudiVZ started some great word-of-mouth marketing (and stole the entire design from Facebook), it spread very quickly and became THE social network in Germany. In 2007/2008 no one would have imagined how the most popular German website could ever fall behind. StudiVZ (having been acquired by a traditional media company) tried to make advertising dollars, while Facebook started to gain real social network know-how. Not surprisingly, Facebook passed StudiVZ within a couple of months during 2010.

The Experiment: How good is the user search on social networks?

A must-have feature of every social networking site is the user search. So I wanted to test how well the user search works on both sites (already knowing that Facebook would easily win this battle). I thought of a person with a very common name who is not a friend of mine on either of these social networking sites.
After a little bit of thinking I came up with Sebastian Jung. On Facebook as well as on StudiVZ he is registered with his real name (along with about 140 other Sebastian Jungs in Germany). Sebastian was in my grade in high school together with 130 other students. I hardly know him.

Search for Sebastian Jung on StudiVZ:

Typing his name into StudiVZ brings up his profile in the 4th position. Lucky me that he has recently updated his StudiVZ profile, which is to my knowledge the variable the user search results are sorted by. If he hadn’t done this he would have disappeared somewhere between those 140 other Sebastian Jungs that have a StudiVZ profile with the same name.

Search for Sebastian Jung on Facebook:

Typing his name into Facebook search immediately shows his profile in the first position. In my case this is particularly interesting, but let us first explore why Facebook does so well.

How does Facebook user search rank the results?

Of course the exact algorithm is secret, but the idea is easy. As everyone knows, we can measure the distance between two people in a social network by the shortest path of people between them. Uff. Shortest path?!? What does this mean?
For Sebastian Jung and me this shortest path would be of length 1, since I have a friend from my old school who is a friend of Sebastian Jung. This in turn means there is one person between Sebastian Jung and me.
For our German Chancellor and me the distance would probably be 3 (wild guess), but I think you get the point. So what Facebook does is sort all the Sebastian Jungs on the result page according to their distance from me. Pretty smart, isn’t it? But Facebook is probably using even a little bit more information. Let us assume I have 4 common friends with this Sebastian Jung and maybe 1 common friend with another Sebastian Jung. The distance in both cases would be 1, but the one I have more common friends with is probably more relevant to me and will most likely be shown first.
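The ranking idea sketched above can be put into a few lines of Java. To be clear: this is my guess at the principle, not Facebook’s actual algorithm, and the graph, the names and the tie-breaking rule are all made up. Distance here is counted in hops along friendship edges rather than in “people in between”.

```java
import java.util.*;

// Sketch of the ranking idea: sort profiles with the same name by their
// graph distance to the searching user, breaking ties by the number of
// common friends. Hypothetical graph and names for illustration only.
public class UserSearchRank {
    private final Map<String, Set<String>> friends = new HashMap<>();

    void addFriendship(String a, String b) {
        friends.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        friends.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    // Breadth-first search: length (in hops) of the shortest path from start.
    int distance(String start, String target) {
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String u = queue.poll();
            if (u.equals(target)) return dist.get(u);
            for (String v : friends.getOrDefault(u, Set.of())) {
                if (!dist.containsKey(v)) {
                    dist.put(v, dist.get(u) + 1);
                    queue.add(v);
                }
            }
        }
        return Integer.MAX_VALUE; // not connected at all
    }

    int commonFriends(String a, String b) {
        Set<String> common = new HashSet<>(friends.getOrDefault(a, Set.of()));
        common.retainAll(friends.getOrDefault(b, Set.of()));
        return common.size();
    }

    List<String> rank(String me, List<String> candidates) {
        List<String> result = new ArrayList<>(candidates);
        result.sort(Comparator.comparingInt((String c) -> distance(me, c))
                .thenComparingInt(c -> -commonFriends(me, c)));
        return result;
    }

    public static void main(String[] args) {
        UserSearchRank g = new UserSearchRank();
        g.addFriendship("me", "oldSchoolFriend");
        g.addFriendship("oldSchoolFriend", "sebastianJung1");
        g.addFriendship("me", "otherFriend");
        g.addFriendship("otherFriend", "sebastianJung1"); // two common friends
        g.addFriendship("someStranger", "sebastianJung2"); // unreachable from me
        System.out.println(g.rank("me", List.of("sebastianJung2", "sebastianJung1")));
        // [sebastianJung1, sebastianJung2]
    }
}
```

At Facebook’s scale one would of course not run a fresh BFS per query; the point is only that distance plus common friends already explains the result ordering I observed.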

Oh and why is this particular interesting for my case?

You can call me paranoid or something, but I am still afraid that Facebook knows too much about me if I tell them more about my friendships. That’s why I decided to have 0 friends on Facebook. Obviously Facebook is not only using the actual friendships that exist but also the 120 friendship requests I have received so far and other knowledge (maybe people have uploaded my email address together with their address book). Anyway, this experiment shows that my fear obviously has a reason, but it also shows that I clearly failed to protect my most sensitive data from Facebook.

Conclusion:

  1. Still, I am very convinced that Facebook’s success is due to the fact that these little things just silently work perfectly in the background, producing great user satisfaction.
  2. As I always say: you cannot steal an idea on the Internet. If you don’t understand the idea you might have a short success, but then you’ll fail because your product will just not be as good as your competitor’s product.
  3. If you want to be successful on the Internet, don’t focus on selling ads and making money in the first place. Look at what the big players have been doing! Focus on user satisfaction. If your users are happy I am pretty sure the money and reward will come to you!
  4. Even though the pages look alike and StudiVZ is still copying features from Facebook, they obviously don’t understand the essence of these features and what exactly makes them great. Otherwise, after 5 years of operations they would have a good working user search, which should be the kernel of any social networking service.
  5. Much to learn and improve for my own social network Metalcon, which has a crappy search function overall (-:
  6. I still haven’t dug deeper into the EdgeRank algorithm 🙁

I am happy to read your comments and thoughts as well as your own experiments with user search on Facebook and other social networks. What other (technical!) reasons do you think make Facebook the superior social network in comparison to sites like Myspace, Orkut, StudiVZ, Bebo, …?
