Sorry for this being late. My internet crashed on me and Cox Communications was anything but helpful for nearly two hours.
What’s the problem? I won’t know until Tuesday! But learn from my experience: you can connect your computer directly to your modem and still get internet, yay!
(If you already knew that, I envy you.)
I was really looking forward to the activities for this week! While I did not have fun with maps last week (sorry Danielle), I had high hopes for the opportunities to visualize in different ways!
Unfortunately, Java crushed all my hopes and dreams in multiple ways.
I had a hard time following some of the readings, especially Johanna Drucker, but even without comprehending 100% of what each author was saying, they still helped me get an idea of what I can do with these programs. Just that knowledge going in made understanding the activities that much easier.
Likely inspired by our readings, I chose to create a data set of information about banned books. I thought it would be interesting to see if there were any similarities between certain authors or genres. My data ended up being short: I used the top ten contested books for the years 2010–2014 and recorded the year contested, title, author, and reasons contested. The most interesting findings were certain books being contested in multiple years (Fig 1) and the frequency of certain reasons (Fig 2). I displayed these in Palladio with a graph for each. Something interesting I noticed, though it is not represented in either graph, is how the reasons for banning a book changed over the years; there are some telling trends in what the hot-button issues were for each year. If this were my field, that is definitely an aspect I would look into.
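For anyone curious, the two patterns I pulled out of Palladio can be sketched in a few lines of Python. The rows below are hypothetical stand-ins in the same shape as my spreadsheet (year contested, title, author, reasons contested), not the actual ALA entries:

```python
from collections import Counter

# Hypothetical rows in the same shape as my spreadsheet -- these are
# stand-in titles, NOT the actual ALA top-ten entries.
rows = [
    ("2010", "Title A", "Author A", "offensive language"),
    ("2011", "Title A", "Author A", "violence"),
    ("2011", "Title B", "Author B", "offensive language"),
]

# Books contested in more than one year (the pattern behind Fig 1)
years_per_title = {}
for year, title, author, reasons in rows:
    years_per_title.setdefault(title, set()).add(year)
repeats = {t: sorted(y) for t, y in years_per_title.items() if len(y) > 1}
print(repeats)

# How often each reason appears (the pattern behind Fig 2)
reason_counts = Counter(reason for _, _, _, reason in rows)
print(reason_counts.most_common())
```

With real data, the same grouping would also expose the year-by-year shifts in reasons that I noticed but didn’t graph.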
[Figure 1] [Figure 2]
The next activity taught me that I didn’t have Java on my computer. Now I do. However, I ran into another problem after Java was installed: Gephi would not open on Windows 10, at least not for me (anyone else on Windows 10 have a different experience?). Thanks to the #cliowired hashtag, I gave Cytoscape a try. Unfortunately, it took a lot longer to download than Gephi did, and I wasn’t able to use the wonderful steps provided by Brian Sarnacki. Technology is not working for me tonight. Cytoscape would not install because it said I did not have a proper Java version and that the version I had downloaded previously was corrupted. I tried uninstalling and re-installing Java, but no dice. I cannot get Gephi to open or Cytoscape to install.
So that’s frustrating.
Next was Voyant. For this, I decided to take ten books from Project Gutenberg’s Native American category. My first attempt, entering the URLs of the plain-text files on separate lines, did not pan out, so I created a text file for each book and uploaded those to Voyant to analyze; this worked, and I immediately saw the need for help from the documentation. I had no idea what I was looking at. The first step was entering my stop words: the results were giving me words like “a,” “an,” “the,” and “that” as the most common in each text. I was surprised at how many times I ended up editing the stop word list; there are so many more words that get in the way than I first thought!
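The stop-word step is easy to sketch outside of Voyant. Here is a minimal version of what the tool is doing under the hood, with a short hypothetical stop list (Voyant ships its own editable one) and a made-up sentence:

```python
import re
from collections import Counter

# A made-up sample sentence, just to show the mechanics
text = "The chief spoke to the council, and the council listened."

# Hypothetical stop list; Voyant provides a much longer editable one
stop_words = {"the", "a", "an", "and", "to", "that"}

# Lowercase, tokenize, drop stop words, then count what remains
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(w for w in words if w not in stop_words)
print(counts.most_common(3))
```

Without the filter, “the” would dominate the counts, which is exactly the problem I kept running into before editing the list.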
When looking through the tools in the documentation, I was curious about the word bubble one. When I clicked “use it” and it took me back to what looked like the normal Voyant page, I understood what the URL input option was for. I discovered that Java does not work in Microsoft Edge, so I switched to Firefox. Java security then blocked the plug-in from running, so I was not able to see the frequency of words displayed in bubbles. The two tools I was able to get working were Bubblelines and Cirrus. With Bubblelines, I was disappointed that words I had designated as stop words still showed up as the most frequent; I’m not sure if there is something wrong with its reading or with me, but other words I entered did disappear, so I am confused, that’s for sure. Cirrus is neat; I like this one. It is very colorful, and while it also includes some words I designated as stop words, it shows much more than only those words. Overall, I could see this being useful for creating a visualization of the topics the soldiers in my project discuss the most.
Lastly, a program running on Java that works for me! It’s a miracle! The MALLET GUI was very easy to understand and follow thanks to the introduction linked in our readings; there is no way I would have known what it was asking for as input and output otherwise! I really enjoyed that it created ten different topics from my texts. It was interesting exploring each topic and seeing the frequency with which each text appeared in it. Some texts were prominent while others barely made a dent (a difference of 8,923 words versus only 20). It is clear that some of these topics nearly describe an entire text, with some extras in there for support. A lot of tweaking needs to be done, and possibly choosing texts that have more in common than just being classified under Native American. I would like to try this again with treaties; I think it would be interesting to see the kind of language used between the government and the tribes.
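Those word counts are easy to illustrate. As I understand it, a topic model like MALLET assigns each word token in a text to a topic, and the per-document totals of those assignments are what make one text “prominent” in a topic while another barely makes a dent. A toy sketch with made-up assignments (the 8,923-versus-20 gap mirrors what I saw in my own output):

```python
from collections import Counter

# Made-up per-word topic assignments for two hypothetical texts --
# the kind of labeling a topic model produces under the hood.
assignments = {
    "text_one.txt": ["topic_0"] * 8923 + ["topic_1"] * 20,
    "text_two.txt": ["topic_1"] * 5000 + ["topic_0"] * 4000,
}

# Share of each document's words assigned to each topic
for doc, topics in assignments.items():
    counts = Counter(topics)
    total = len(topics)
    print(doc, {t: round(n / total, 3) for t, n in counts.items()})
```

In this toy case the first text is almost entirely one topic, which is the “topic nearly describing an entire text” pattern I noticed.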