Big step towards open access by Great Britain and a comment from Neelie Kroes

During my vacation a lot of things happened, and it was only today that I came across the following article and discussion: http://royalsociety.org/policy/projects/science-public-enterprise/report/. Yes, you read correctly: the Royal Society wants to create open access to all publications financed by the British government. What a big step! Congratulations to all British people for being such a role model.
It fits perfectly with my project-related work and with other discussions I have been joining recently.

Even though this development is very good to see, I am not happy about how the ensuing discussion about models for fulfilling the Royal Society's goals is going.

Neelie Kroes from the European Commission posted a really nice answer!

I am glad to see this step forward. After my successful submission of Graphity and reading the IEEE copyright form I had to sign, I really did have concerns about publishing my work with them.
I am still considering no longer submitting to big journals and conferences and instead publishing only on my university's website, my blog, and/or on open preprint archives.

Data mining (text analysis) for linguists on Ulysses by James Joyce & Faust by Goethe

Over the weekend I met some students studying linguistics. Methods from linguistics are very important for text retrieval and data mining. That is why, in my opinion, linguistics is also a very important part of web science. I am always concerned that most people doing web science are actually computer scientists and that much of the potential of web science is being lost by not paying attention to all the disciplines that could contribute to it!
That is why I tried to teach the linguists some basic Python in order to do some basic analysis on literature. The following script, which is more of a hack than beautiful code, can be used to analyse texts by different authors. It will display the following statistics:

  • Count how many words are in the text
  • Count how many sentences it contains
  • Calculate the average number of words per sentence
  • Count how many different words are in the text
  • Count how many times each word appears
  • Count how many words appear only once, twice, three times, and so on…
  • Display the longest sentence in the text

You could probably ask even more interesting questions, analyze texts from different centuries and languages, and do a lot of interesting stuff! I am a computer scientist / mathematician, so I don't really know which questions to ask. If you are a linguist, feel free to give me feedback and suggest some more interesting questions (-:
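
Before getting to the numbers and the full script, here is a minimal sketch of the core word-counting step. It uses collections.Counter from the Python standard library instead of the plain dictionaries used in the script further down, and it assumes a plain-text file such as faust1.txt (one of the texts analysed below) sits in the working directory.

# minimal word-frequency sketch using collections.Counter
from collections import Counter

# read the text and normalise it to lower case
f = open("faust1.txt", "r")
text = f.read().lower()
f.close()

# crude tokenisation: replace a few punctuation characters by spaces, then split on whitespace
for c in [".", ",", ";", "_", "-", ":", "!", "?", "\"", ")", "("]:
    text = text.replace(c, " ")
words = text.split()

counts = Counter(words)
print("total words: " + str(len(words)))
print("different words: " + str(len(counts)))
# the ten most frequent words and how often they occur
for word, n in counts.most_common(10):
    print(word + " " + str(n))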

Some statistics I calculated

ulysses
264965 words in 27771 sentences
==> 9.54 words per sentence
30086 different words
==> every word was used 8.82 times on average
faust1
30632 words in 4178 sentences
==> 7.33 words per sentence
6337 different words
==> every word was used 4.83 times on average
faust2
44534 words in 5600 sentences
==> 7.95 words per sentence
10180 different words
==> every word was used 4.39 times on average

Disclaimer

I know that this is not yet a tutorial and that I don't explain the code very well. To be honest, I don't explain the code at all, which is a pity. When I was trying to teach Python to the linguists I started the way you always start: "This is a loop and that is a list. Now let's loop over the list and display the items…" There wasn't much motivation left after that. The script below was created once I realized that coding should not be taught abstractly and that an interesting example has to be used.
If people are interested (please tell me in the comments!) I will consider creating a Python tutorial for linguists that starts right away with small scripts doing useful stuff.
By the way, you can download the texts that I used for the analysis from the following places:


# this code is licenced under a creative commons licence as long as you
# cite the author: Rene Pickhardt / www.rene-pickhardt.de

# adds leading zeros to a string so all result strings can be ordered
def makeSortable(w):
    l = len(w)
    tmp = ""
    for i in range(5 - l):
        tmp = tmp + "0"
    tmp = tmp + w
    return tmp

# replaces every character passed in the list l within the text s by the 2nd argument new
def removeDelimiter(s, new, l):
    for c in l:
        s = s.replace(c, new)
    return s

# counts how often each word occurs and how many words share the same frequency
def analyzeWords(s):
    s = removeDelimiter(s, " ", [".", ",", ";", "_", "-", ":", "!", "?", "\"", ")", "("])
    wordlist = s.split()
    dictionary = {}
    for word in wordlist:
        if word in dictionary:
            tmp = dictionary[word]
            dictionary[word] = tmp + 1
        else:
            dictionary[word] = 1
    l = [makeSortable(str(dictionary[k])) + " # " + k for k in dictionary.keys()]
    for w in sorted(l):
        print w
    count = {}
    for k in dictionary.keys():
        if dictionary[k] in count:
            tmp = count[dictionary[k]]
            count[dictionary[k]] = tmp + 1
        else:
            count[dictionary[k]] = 1
    for k in sorted(count.keys()):
        print str(count[k]) + " words appear " + str(k) + " times"

# counts the different words and the average number of uses per word
def differentWords(s):
    s = removeDelimiter(s, " ", [".", ",", ";", "_", "-", ":", "!", "?", "\"", ")", "("])
    wordlist = s.split()
    count = 0
    dictionary = {}
    for word in wordlist:
        if word in dictionary:
            tmp = dictionary[word]
            dictionary[word] = tmp + 1
        else:
            dictionary[word] = 1
            count = count + 1
    print str(count) + " different words"
    print "every word was used " + str(float(len(wordlist)) / float(count)) + " times on average"
    return count

# counts words and sentences and prints the longest sentence in the text
def analyzeSentences(s):
    s = removeDelimiter(s, ".", [".", ";", ":", "!", "?"])
    sentenceList = s.split(".")
    wordList = s.split()
    wordCount = len(wordList)
    sentenceCount = len(sentenceList)
    print str(wordCount) + " words in " + str(sentenceCount) + " sentences ==> " + str(float(wordCount) / float(sentenceCount)) + " words per sentence"
    maxLen = 0
    satz = ""
    for w in sentenceList:
        if len(w) > maxLen:
            maxLen = len(w)
            satz = w
    print satz + " length: " + str(len(satz))

texts = ["ulysses.txt", "faust1.txt", "faust2.txt"]
for text in texts:
    print text
    datei = open(text, 'r')
    s = datei.read().lower()
    analyzeSentences(s)
    differentWords(s)
    analyzeWords(s)
    datei.close()

If you save this script as getstats.py on a Linux machine, you can redirect its output into a file that you can keep working with by running
python getstats.py > out.txt
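
If you would rather keep working in Python than on the raw text file, a small, hypothetical follow-up script along the lines below could read out.txt back in and pick out just the word-frequency lines (those of the form 00042 # word printed by analyzeWords) for further processing.

# hypothetical post-processing sketch: read the redirected output back in
# and keep only the word-frequency lines of the form "00042 # word"
import re

frequencies = []
f = open("out.txt", "r")
for line in f:
    m = re.match(r"(\d+) # (.+)", line.strip())
    if m:
        frequencies.append((int(m.group(1)), m.group(2)))
f.close()

# show the ten most frequent words found in the output file
for n, word in sorted(frequencies, reverse=True)[:10]:
    print(word + " " + str(n))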
