Friday, November 25, 2016

Wireless Virtual Reality

There are many new and amazing developments in modern technology happening every day. One of them is virtual reality, in which people can experience a simulated reality through the lens of VR goggles. As if virtual reality itself weren't enough, researchers have now developed wireless VR goggles, which allow people to move around freely as they experience an alternate world. The push for a wireless device came about because the HDMI cords that tether the headset to the computer processing the data led to people tripping over them as they walked around wearing the goggles.

One of the problems with making the device wireless was that the headset requires a great deal of data processing even when it is wired, and streaming that data without wires is even harder to support. The researchers decided to try mmWaves, high-frequency millimeter waves that were thought to be able to carry enough data for the device to work wirelessly. One downside of these waves is that they do not cope well with obstacles, which a person is likely to encounter unless they use the headset in a completely empty room. Researchers at MIT developed a programmable mirror system called MoVR that reflects the waves at configurable angles instead of bouncing them off at the same angle they arrive, as an ordinary reflector would. This allows the VR headset to be used around obstacles without losing its signal.

The researchers have been able to get the wireless VR device to work, but they are looking to make the hardware more compact, as it is currently much larger than a person would feel comfortable wearing on their face. They are also working on making the waves from multiple headsets compatible with one another, so that several devices can share a room and multiplayer virtual worlds can exist wirelessly.

References:

https://www.eecs.mit.edu/news-events/media/enabling-wireless-virtual-reality

http://www.news.com.au/technology/gadgets/wearables/review-samsungs-virtual-reality-glasses-gear-vr-are-really-here-but-are-they-really-worth-buying/news-story/8055338addf88802c5ba1b913242c42d

Friday, November 18, 2016

Stanford University Using Language Analysis for Crisis Hotlines

Many people around the world suffer from mental health disorders. Among them are depression and anxiety, two conditions that can affect how a person functions on a day-to-day basis in relationships, academics, and work life. Crisis hotlines have been put in place to help people suffering from these mental health issues so that they can talk through any harmful thoughts they may be having.
There has been a recent emergence of crisis hotlines that can be reached via text. Now, instead of calling the hotline and talking to a stranger, people with mental health issues can text throughout the day about how they are feeling. Graduate students at Stanford University analyzed hundreds of thousands of texts from thousands of conversations between people with mental health disorders and counselors at a crisis hotline. They were looking for a way to determine whether a text conversation had been effective or not, and they used natural language analysis to test whether a certain way of texting improved how the person felt after the conversation.

The researchers found that the successful conversations all moved through five stages (introduction, problem setting, problem exploration, problem solving, and wrap-up), each marked by key words. By analyzing the language of the crisis counselors, the researchers hope to build an automated counseling system that could increase the number of people who can be helped. They are also hoping to use artificial intelligence to make the automated counseling seem more human-like and approachable for a person with a mental health disorder.
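To make the keyword-marked stages concrete, here is a minimal sketch of how a message in a conversation could be tagged with a stage based on keywords. This is my own toy illustration, not the Stanford researchers' method; the stage names follow the list above, but the keyword lists are invented for the example.

```python
# Toy keyword-based stage tagging for a counseling conversation.
# The keyword lists are made up purely for illustration.
STAGE_KEYWORDS = {
    "introduction":        {"hi", "hello", "name", "here to help"},
    "problem setting":     {"happened", "going on", "tell me more"},
    "problem exploration": {"feel", "why", "how long", "when"},
    "problem solving":     {"could", "try", "plan", "next step"},
    "wrap up":             {"thank", "goodbye", "take care", "reach out"},
}

def tag_stage(message: str) -> str:
    """Return the stage whose keywords appear most often in the message."""
    text = message.lower()
    scores = {stage: sum(kw in text for kw in kws)
              for stage, kws in STAGE_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(tag_stage("Hi, my name is Alex and I'm here to help."))  # introduction
print(tag_stage("Thank you so much, take care of yourself."))  # wrap up
```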

Friday, November 11, 2016

MIT Develops Autonomous Scooter

With self-driving cars on the rise, it is easy to forget that there are other forms of transportation that can be used hands-free. A new vehicle to join this group is the autonomous scooter. Scooters used by the elderly and disabled have now been developed to drive themselves for easier use by the rider. The software for the scooter was developed by MIT's Computer Science and Artificial Intelligence Laboratory with help from the National University of Singapore and the Singapore-MIT Alliance for Research and Technology. This is helpful technology because it gives people who are mobility impaired an easier form of transportation. The scooter can be used both indoors and outdoors, and the researchers are working on making it maneuver through tight spaces.
The scooter's software has many layers, as sketched below. There is a low-level control algorithm that allows the vehicle to respond immediately to changes in its surroundings, including avoiding objects and pedestrians in its path. There is also a route-planning algorithm that the vehicle uses to figure out where it is on a map. The control algorithm for the scooters is also used for golf carts and city cars, which is beneficial because it allows for uniformity and makes the systems easier to understand. In addition, this uniformity allows information to be transferred easily across vehicles and reduces complexity when developing them.
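As a rough sketch of that layered design (my own simplification, not the actual MIT/SMART software), the low-level safety layer can override whatever the route planner proposes whenever a sensor reports a nearby obstacle:

```python
# Simplified two-layer control loop: a route planner proposes a command and
# a low-level safety layer overrides it when an obstacle is detected.
# Purely illustrative; not the actual MIT/SMART software.
import math
from dataclasses import dataclass

@dataclass
class Command:
    speed: float    # meters per second
    heading: float  # degrees

def heading_to(position, goal) -> float:
    dx, dy = goal[0] - position[0], goal[1] - position[1]
    return math.degrees(math.atan2(dy, dx))

def plan_route(position, goal) -> Command:
    """High-level layer: head straight toward the goal (toy planner)."""
    return Command(speed=1.0, heading=heading_to(position, goal))

def low_level_control(cmd: Command, obstacle_distance: float) -> Command:
    """Low-level layer: slow down or stop when something is too close."""
    if obstacle_distance < 0.5:
        return Command(speed=0.0, heading=cmd.heading)  # stop for the pedestrian
    if obstacle_distance < 2.0:
        return Command(speed=0.3, heading=cmd.heading)  # creep forward carefully
    return cmd

print(low_level_control(plan_route((0, 0), (10, 0)), obstacle_distance=1.2))
# Command(speed=0.3, heading=0.0)
```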

These autonomous scooters are a great development for our increasingly disabled-friendly world. Not only do we have doors that open automatically for wheelchairs, as most buildings have implemented, but we will now have transportation for people who lack mobility, so that they, too, can lead an autonomous lifestyle.

References:

https://www.eecs.mit.edu/news-events/media/driverless-vehicle-options-now-include-scooters

http://www.theonion.com/graphic/new-tandem-mobility-scooter-released-33043

Friday, November 4, 2016

Data Corruption

If you have read any of my previous blog posts, you may have seen that I have already discussed data analysis in depth. If you are unfamiliar with data analysis or have forgotten what it is, it is simply the process of analyzing and modeling data to extract useful information and draw conclusions from it. Researchers in computer science at MIT have created a new set of algorithms that "can efficiently fit probability distributions to high-dimensional data" (MIT, 2016). This is helpful because many of the apps and websites we use every day deal in high-dimensional data, and knowing how to handle corruption in that data, if it happens to occur, goes a long way toward making our lives easier.

The trouble is that if a data set contains corrupted entries, the standard data-fitting techniques can break down and produce models that do not describe the data properly. Having data with many dimensions, and an immense number of records, makes any corruption much harder to detect and correct. The MIT researchers noted that an estimate based on the median of the data is far less sensitive to corrupted entries than one based on the average, and they took this into consideration when designing their algorithm.
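Here is a quick numeric illustration of why the median resists corrupted entries so much better than the average. This is just a toy demonstration of the statistical point, not the MIT algorithm itself:

```python
import numpy as np

# 1,000 clean samples centered at 5, plus ten wildly corrupted entries.
rng = np.random.default_rng(0)
clean = rng.normal(loc=5.0, scale=1.0, size=1000)
corrupted = np.concatenate([clean, [1e6] * 10])

print(np.mean(clean),     np.median(clean))       # both close to 5
print(np.mean(corrupted), np.median(corrupted))   # mean blows up, median barely moves
```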

Computer scientists commonly test data by looking at 2-D cross-sections of its graph and checking whether they look like Gaussian distributions. Gaussian (normal) distributions are continuous distributions that, among other things, closely approximate the binomial distribution; data whose cross-sections do not look Gaussian likely has corruption within it. The researchers combined the Gaussian with another common distribution, the product distribution, to create an algorithm with efficiency and real-world applicability as its central focus.
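As a rough illustration of the cross-section idea (not the researchers' actual procedure), you can project data onto a couple of coordinates and run a standard normality test on each; a corrupted coordinate stops looking Gaussian:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(size=(5000, 50))   # toy high-dimensional "clean" data
data[:100, 0] = 40.0                 # corrupt one coordinate in a few rows

# Look at a 2-D cross-section (coordinates 0 and 1) and test each axis.
section = data[:, [0, 1]]
for axis in range(2):
    stat, p = stats.normaltest(section[:, axis])
    print(f"coordinate {axis}: p-value {p:.3g}")
# The corrupted coordinate yields a tiny p-value, i.e. it no longer looks Gaussian.
```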

References:
https://www.eecs.mit.edu/news-events/media/finding-patterns-corrupted-data

https://en.wikipedia.org/wiki/Data_analysis

Friday, October 28, 2016

Voting Machines and Their Lack of Safety

With the hilarious, sad, and downright lackluster presidential election that is occurring, it is hard not to pay attention as the process plays out. November 8th is coming, and this is probably one of the most important elections for eligible voters to actually get out and vote in.

Researchers at Princeton University, along with some computer science graduate students, worked to see how they could hack a voting machine and change the outcome of a cast vote, in order to test the security of the voting system. A professor by the name of Andrew Appel, together with other professors and graduate students, showed how they could hack the AVC Advantage voting machines used in many states. When they studied the source code of the AVC Advantage, they found that it does not follow best software engineering practices and that the Independent Test Authority (ITA) report does not accurately or sufficiently assess the machine's security. At least two program bugs slipped through the ITA review, according to the professors.

There were also some user interface design flaws in the AVC Advantage that have the potential to cause inaccuracy in recording votes. Ballots are prepared and results are tallied with a Windows application called WinEDS that runs on ordinary computers. The votes cast on an individual machine are recorded in a cartridge, which poll workers bring to election headquarters after the polls close, while the voting machines themselves are left at the polling places for a few days until a trucking company picks them up. This allows ample time for a dedicated and determined hacker to do their work. In addition, the source code of the WinEDS application appears to have been written by another company and sold to the maker of the AVC Advantage. This could hurt accuracy and reliability, because the company that builds and distributes the voting machines did not even write the code they run, which could leave holes in their security.

When it comes to something as important as electing the president of one of the most influential countries in the world, there should be far more research into how to make the voting system safer so that it truly reflects a democracy.

References:





Friday, October 21, 2016

Algorithm Connecting Students at MIT

There are approximately seven billion people in the world who live both near and far from us. With advancements in technology we are able to connect with people from all walks of life and just about every part of the world. Two MIT graduate students, Mohammad Ghassemi and Tuka Al-Hanai, are trying to get in on the trend of people wanting to connect over their electronic devices. They created an algorithm that connects students at MIT for friendly lunch dates to meet people all across campus that they likely wouldn’t meet otherwise.

They started with a Google Doc, which they sent to the student body so that students could sign up for these weekly lunch dates for the semester. The form is essentially a survey that asks questions to gauge your compatibility with other people. Both students had experience with the branch of computer science involving artificial intelligence, and together they developed an algorithm for the project that they call 'Maven'. The algorithm involves link analysis, which you can read about in one of my previous blogs, to analyze the links between two people: the more connections two people share, the higher the chance they are matched together.
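The article doesn't publish Maven's internals, so here is only a toy sketch of the intuition above: count the "links" (shared survey answers) between two people and match the pairs that share the most. The names and answer categories are made up.

```python
# Toy compatibility score in the spirit described above, not the actual
# Maven algorithm: count the links two people share across survey answers.
from itertools import combinations

profiles = {
    "alice": {"dept:eecs", "hobby:climbing", "dorm:east", "coffee:yes"},
    "bob":   {"dept:eecs", "hobby:chess",    "dorm:east", "coffee:yes"},
    "carol": {"dept:bio",  "hobby:climbing", "dorm:west", "coffee:no"},
}

def score(a: set, b: set) -> int:
    return len(a & b)   # number of shared answers ("links")

pairs = sorted(combinations(profiles, 2),
               key=lambda p: score(profiles[p[0]], profiles[p[1]]),
               reverse=True)
print(pairs[0])   # ('alice', 'bob') -- the pair with the most shared links
```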


Many people at MIT say that they enjoy this program because it lets them make friends more easily, without the fear freshmen often feel about going to an event by themselves. The love for the program shows in the numbers, as "93 percent of participants said that they rate the program four or above". Hopefully something like this can be brought to the University of Richmond to help students acclimate to campus life.

References:
https://anniecoops.com/tag/connections/

https://www.eecs.mit.edu/news-events/media/algorithm-connects-students-most-interesting-person-theyve-never-met

Friday, October 14, 2016

Developments at MIT: Automated Screening for Childhood Communication Disorders

Children with speech and language disorders, especially those under the age of six, often do not have their disabilities caught early because parents and teachers fail to identify the issue. If the disorders are not caught early in the child's development, they can lead to academic and social anxiety as the child grows older. Roughly 60 percent of these kids go undiagnosed until after kindergarten, which is an unnecessarily high number. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory are trying to reduce that percentage by building a computer system that can automatically screen young children for speech and language disorders. The team has made steady progress but has not yet completed its work.

The system works by analyzing audio recordings of children's performances on standardized storytelling tests. The scientists plan to make the screening completely automated and possibly accessible through phones and tablets, allowing low-cost screening for large numbers of children. Two graduate students in electrical engineering and computer science at MIT used machine learning (which you can read about in one of my previous blog posts) to search through large sets of training data for patterns that correspond to particular classifications. They identified 13 acoustic features of children's speech that their machine learning system could search and correlate with a specific disorder. The system was trained on three different tasks: identifying any impairment, identifying language impairments, and identifying speech impairments.
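To make the machine-learning step concrete, here is a minimal sketch of training a classifier on a couple of acoustic-style features to flag a possible impairment. The features, numbers, and labels are synthetic stand-ins, not the MIT team's data or model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for acoustic features (e.g. pause length, speaking rate).
# Label 1 = "possible impairment", 0 = "typical speech" -- made-up data.
n = 200
X_typical  = rng.normal(loc=[0.3, 4.0], scale=0.1, size=(n, 2))
X_impaired = rng.normal(loc=[0.8, 2.5], scale=0.1, size=(n, 2))
X = np.vstack([X_typical, X_impaired])
y = np.array([0] * n + [1] * n)

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.75, 2.6]]))   # -> [1], flagged for follow-up screening
```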

One issue was accounting for age and gender, since both can affect how a child speaks. One of the graduate students used a statistical technique called residual analysis to identify correlations between the subjects' age and gender and the features of their speech, and then removed those correlations from the data before feeding it to the machine learning algorithm. This advancement could lead to more children having their speech disorders addressed before they become a large negative part of their lives.
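Residual analysis can be sketched roughly as follows (my own simplified version with synthetic numbers, using an ordinary least-squares fit): regress each speech feature on age and gender, keep only the residuals, and hand those to the learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
age    = rng.uniform(3, 6, size=n)    # years
gender = rng.integers(0, 2, size=n)   # 0/1 encoding
# A synthetic speech feature that drifts with age and gender:
feature = 2.0 * age + 0.5 * gender + rng.normal(scale=0.3, size=n)

# Least-squares fit of feature ~ intercept + age + gender, then residuals.
design = np.column_stack([np.ones(n), age, gender])
coeffs, *_ = np.linalg.lstsq(design, feature, rcond=None)
residuals = feature - design @ coeffs

print(np.corrcoef(age, feature)[0, 1])     # strong correlation before
print(np.corrcoef(age, residuals)[0, 1])   # near zero after residualizing
```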

References:

Thursday, October 6, 2016

Tesla's New Innovation: the Model X


Of the many car brands available to the public, Tesla, in my opinion, has had the most groundbreaking innovations of the last 10 or even 20 years. They have recently come out with a new masterpiece, the Tesla Model X, a beautiful, sleek car with more to it than meets the eye. It is the fastest sport utility vehicle in history, with 289 miles of range, and it can accelerate from zero to 60 miles per hour in 2.9 seconds. In addition, the car has a 100 kWh battery, which means it uses no gas and produces no tailpipe pollution. The car has falcon-wing doors, which open upward toward the roof instead of out to the side like a standard car's, and it also has a panoramic windshield that gives the passengers and driver a larger view of the outside world than they would have in a mundane car.

This car also has the very cool Autopilot feature, which matches highway speeds and handles stop-and-go traffic with ease, and can even scan for parking spaces and parallel park the car for you! This feature uses a camera, radar, and ultrasonic sensors to work properly. There are 12 ultrasonic sensors placed around the bumpers and sides of the car that can detect objects up to 16 feet away. These sensors, the forward-facing radar and camera, and the car's GPS all collect data that the car's system combines so it can drive without hitting anything in its vicinity. The car is programmed with a predetermined scope: it watches the region of space surrounding it to make sure it is moving in a direction with nothing in front of it for at least some distance. Essentially, the logic is: "if" there is something in the way of the car, steer out of the way or stop; "else" keep driving down the same path.
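Spelled out as a toy decision step (my own simplification, certainly not Tesla's actual Autopilot code), that logic might look like this:

```python
# Toy version of the "keep driving unless something is in the way" logic
# described above. Purely illustrative; not Tesla's Autopilot software.
def autopilot_step(obstacle_ahead: bool, can_steer_around: bool) -> str:
    if obstacle_ahead and can_steer_around:
        return "steer around obstacle"
    elif obstacle_ahead:
        return "brake and stop"
    else:
        return "continue on current path"

print(autopilot_step(obstacle_ahead=False, can_steer_around=False))  # continue on current path
print(autopilot_step(obstacle_ahead=True,  can_steer_around=True))   # steer around obstacle
```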

This model is my favorite of all the Tesla models, as it is an SUV with a lot of space, is versatile in harsh weather conditions, and is still remarkably energy efficient. If you would like to learn more, my references below are very informative on the new model, so feel free to check them out!

References:


Friday, September 30, 2016

Machine Learning and Artificial Intelligence


From movies like I, Robot and the Transformers series to having Siri on our iPhones, we are constantly exposed to the idea and presence of artificial intelligence. Our world is quickly moving toward more and more technology that can interact with humans on an almost human level. Machine learning is a subfield of computer science that grew out of pattern recognition and computational learning theory. It explores the construction of algorithms that can learn from data and make predictions about it, somewhat the way humans learn from experience.

Artificial intelligence is the broader field that machine learning belongs to, and it is defined as "the theory and development of computer systems able to perform tasks that normally require human intelligence". Machine learning algorithms usually build a model from example inputs and use it to make predictions and decisions. Machine learning is also used commercially for prediction in a related field known as predictive analytics, which lets engineers and financial analysts uncover hidden insights by understanding trends in the data and apply them in many financial situations.

Machine learning is usually categorized into three subsets: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is when the user provides the computer with example inputs and their desired outputs, and the computer is tasked with learning a general rule that maps inputs to outputs. Unsupervised learning, on the other hand, is when no labels are given and the algorithm has to find structure in its input on its own. Finally, reinforcement learning is when the program interacts with an environment to reach a certain goal without direct guidance from the user. All three types of machine learning are very common and are used on many of our computational devices. If you would like to learn more about artificial intelligence, please check my references below, as they are very helpful in explaining the different concepts and even the history behind artificial intelligence.
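As a tiny concrete example of the supervised case (a generic sketch, not tied to any particular product): given example inputs and their desired outputs, the algorithm fits a general rule and then predicts outputs for inputs it has never seen.

```python
import numpy as np

# Supervised learning in miniature: example inputs x with desired outputs y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # roughly y = 2x

# Learn the general rule y ≈ a*x + b by least squares.
a, b = np.polyfit(x, y, deg=1)
print(a, b)          # a close to 2, b close to 0
print(a * 6.0 + b)   # prediction for an unseen input, close to 12
```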

References:

http://softwarefocus.net/technology/ten-misconceptions-about-artificial-intelligence.html


Friday, September 23, 2016

What is Data Compression?


Imagine you have a handful of soaking-wet paper towels. You squeeze the handful with considerable force into a small ball, with water leaving the towels as you do. This is an odd yet, I believe, fitting analogy for compression. As computer science students who have turned in four programming assignments made up of multiple .java files, we are very familiar with .zip files. If you are unaware of what a .zip file is, it is simply a file that compresses other files into one compact file. Data compression works by reducing the amount of data needed to store or transmit a given piece of information. It is a critical aspect of our computing devices, as it allows us to transmit large quantities of data over communication networks. The idea of compacting data goes back at least as far as Morse code, which assigned the shortest codes to the most common characters.

Data compression is usually categorized into two subsets: lossless and lossy. Lossless compression is exact and can be reversed to yield the original data. Lossy compression, on the other hand, is inexact and can lose detail or introduce errors when the data is decompressed. The wet paper towel example would be considered lossy, since you squeezed water from the towels as you compressed them; if you were to untangle the paper towel ball, you would have much less water in it, which is like losing data. The .zip files we use for our assignments are lossless, which is great because it means our assignments will never lose anything when they are turned in! Another great thing about data compression is that lossy schemes can usually compress images by factors of 10 to 20 or more, which means we get to store more pictures of our cats, dogs, and family members on our computers to post on Facebook!
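For a concrete taste of lossless compression, here is a classic run-length-encoding sketch. This is not what .zip files actually use (they rely on the more elaborate DEFLATE scheme), but it shows the defining property: decoding recovers the original data exactly.

```python
# Run-length encoding: a simple lossless scheme. Decoding recovers the
# original data exactly, which is what "lossless" means.
def rle_encode(s: str) -> list[tuple[str, int]]:
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * count for ch, count in pairs)

data = "aaaabbbcca"
encoded = rle_encode(data)
print(encoded)                      # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
print(rle_decode(encoded) == data)  # True -- nothing was lost
```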

Data compression is a very important part of our computing systems and, although it is not as simple as squeezing wet paper towels, it is still a good thing to know about, especially as computer science students.


References:

https://www.britannica.com/technology/data-compression#ref886796

https://medium.com/@_marcos_otero/the-real-10-algorithms-that-dominate-our-world-e95fa9f16c04#.azqyazfwh

Photo Reference:

http://www.gitta.info/DataCompress/en/html/CompIntro_learningObject2.html




Friday, September 16, 2016

The Secure Hash Algorithm



Have you ever entered your bank card number or Social Security information into a website to buy something online or to register to vote? Have you ever typed your phone number or email address into an app when creating a profile? Websites, apps, and many other mediums that use the internet tend to be a hub for hackers looking to acquire people's personal information. Today, more than ever, our computing devices need to be kept secure. Hackers and viruses are on the rise, and it is important to protect ourselves and the information we store on our devices.

The Secure Hash Algorithm (SHA) is a family of cryptographic hash algorithms published by the National Institute of Standards and Technology (NIST) that help verify that what you download is what you wanted and not a virus or a backdoor for a hacker to enter your device. It also helps ensure that the information you enter into internet-connected services has not been tampered with. Many apps and websites use these algorithms to protect our information, whether for our personal security or as legal protection for themselves, so that they are not liable if any of the information we enter happens to be hacked.
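In practice this often looks like the following: a site publishes the expected SHA-256 digest of a file, and you hash your download and compare. Here is a minimal sketch in Python; the filename and digest are placeholders, not real values.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values -- substitute the real file and the digest published
# by the site you downloaded it from.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
if sha256_of_file("installer.exe") == expected:
    print("Digest matches: the download was not tampered with.")
else:
    print("Digest mismatch: do not run this file.")
```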

The millennial generation is known for pressing "accept" when asked to agree to terms and conditions, or anything of that nature, without reading them. We don't know for a fact whether the app or website we are using is secure enough to trust with our information, or whether it even guarantees the safety of that information. Thanks to the Secure Hash Algorithm, we mostly don't have to worry about the information we enter into widely known websites or those of large corporations, as most are secured by this family of algorithms or a similar one.

References:

https://www.hackingtrainer.com

https://medium.com/@_marcos_otero/the-real-10-algorithms-that-dominate-our-world-e95fa9f16c04#.azqyazfwh

Friday, September 9, 2016

Link Analysis and Page Rank in Social Media and Search Engines

Have you ever wondered how Google knows which search results to show you when you research something? Have you wondered how Facebook suggests people you may know and may want to send a friend request to? Google, Facebook, and many other websites use link analysis and PageRank to do this. Link analysis is a computational concept widely used by search engines and social media as a means of forming connections and linking what we do on the internet to other things we may do on our devices. Link analysis shows up, for example, in internet cookies: ads from websites you have visited often appear in the sidebars of other pages because the site remembers your visit and links your computer to it. It is also at work on Facebook when you see mutual friends suggested as people to add: link analysis looks at your current friends and your profile details (such as which university you attended or the area you live in) and finds links between you and other people who share those friends or attributes.


Link analysis can be described as a graph represented by a matrix. The structure of that graph helps the computer assess the importance of each node and, essentially, whether it would be relevant if presented to the user. The mathematics behind link analysis can be extensive and confusing, so the picture below illustrates the concept in a non-computational way to help visualize it in simpler terms.

Link analysis is also related to PageRank in search engines. It allows the search engine to remember what you and other people have searched regarding a certain topic and to rank pages so that the most related, and also most visited, page on the topic shows up at the top of your search. Google and most search engines use the concept portrayed in the diagram below in their page rankings. PageRank is easiest to explain with examples, so consider pages C and E. Page C has fewer links pointing to it than page E, yet page C has a higher chance of being visited, because the few links to page C are deemed more important than the many links to page E. This may be because page C was determined to be more relevant to the topic searched, even though fewer pages link to it. The number of links to a page therefore does not automatically mean it will be ranked highest; content quality matters too.
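Here is a minimal sketch of the PageRank idea itself, a standard power-iteration toy on a four-page graph rather than anything Google actually runs: each page repeatedly passes its score along its outgoing links until the scores settle.

```python
import numpy as np

# Toy link graph: which pages each page links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic matrix M: M[j, i] = probability of moving from page i to page j.
M = np.zeros((n, n))
for page, outgoing in links.items():
    for target in outgoing:
        M[idx[target], idx[page]] = 1.0 / len(outgoing)

damping = 0.85
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * M @ rank

print(dict(zip(pages, rank.round(3))))
# Page C collects the highest score: every other page links to it.
```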

Link analysis and PageRank are very helpful tools that we rely on in our day-to-day lives; they make researching topics and finding new friends to add much easier. If you would like more clarification or elaboration on these topics, please look at my references, as they are very helpful!

References:


https://en.wikipedia.org/wiki/PageRank#History

http://pr.efactory.de/e-pagerank-algorithm.shtml

Thursday, September 1, 2016

Computer Science in MRI Scans

MRI (Magnetic Resonance Imaging) scanners are some of the most groundbreaking technologies used in modern medicine. If you have ever had an MRI scan, you will remember the large tube-like machine you were slowly slid into and told to lie completely still in, likely for a very long time.
MRIs use magnetic fields to produce detailed images of our tissues and organs. They are helpful because they can detect abnormalities and can be used as a preventive measure for many diseases and conditions. Computers control the magnetic field in an MRI and retrieve the information from the body scan to transform it into a meaningful image. A major issue with MRIs has been how long a patient has to lie still in the scanner, up to about 45 minutes. MIT came up with an algorithm that reduces that time to about 15 minutes, which greatly benefits both patients and doctors in terms of saving valuable time. MRIs produce many scans during a session and were programmed to start each new scan of the body from scratch, which is what made the machine take so long.
MIT's algorithm instead uses the information from the first scan the MRI acquires as a basis for the rest, so that each subsequent scan takes less time. In the scans after the first, the algorithm tries to predict the structure of the tissue but does not simply assume it, as that would risk losing important details that only show up in later contrast images. Although the image quality is slightly lower than if the MRI took the full 45 minutes, it is still a valid way of scanning a patient and saves both doctor and patient time.

Writing References:
http://news.mit.edu/2011/better-mri-algorithm-1101

https://prezi.com/gx2sq08ffvzb/computer-science-and-health-computer-science-in-mri-technology/

Image References:
http://www.radiologyinfo.org/en/info.cfm?pg=bodymr

http://news.mit.edu/2011/better-mri-algorithm-1101

Algorithms That Have Made Digital Maps Possible

As a society we are accustomed to having a map of the world on our electronic devices and being able to find the fastest route to any known place on Earth. We often take this for granted and don't think about the extensive amount of time spent developing the algorithms that make the resource work the way it does. Edsger W. Dijkstra, the man who essentially developed the 'skeleton' code for such programs, worked on the 'shortest path' problem, which, as you can likely tell from its name, finds the shortest path between two points on a graph. This is useful because we usually try to find the shortest route between two places, with the graph being the grid-like system most streets are arranged into. Shortest path problems tend to get complicated very quickly, as the number of possible route combinations grows exponentially. Dijkstra came up with an efficient approach, illustrated in the gif below.


The gif shows that a computing device first looks at all available routes from the starting point "1" and takes the shortest one from there, then keeps reassessing the shortest known distance to each point as it progresses. Although Dijkstra's algorithm isn't the only one used in modern maps, it was certainly a great starting point and played a large part in evolving maps into the kind we use today.
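Here is a compact sketch of the idea the animation illustrates: a standard textbook implementation of Dijkstra's algorithm on a small toy road network (not any mapping product's actual code).

```python
import heapq

def dijkstra(graph: dict, start: str) -> dict:
    """Shortest distance from `start` to every node in a weighted graph."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        for neighbor, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(queue, (new_d, neighbor))
    return dist

# Toy road network: node -> list of (neighbor, distance) pairs.
roads = {
    "1": [("2", 7), ("3", 9), ("6", 14)],
    "2": [("1", 7), ("3", 10), ("4", 15)],
    "3": [("1", 9), ("2", 10), ("4", 11), ("6", 2)],
    "4": [("2", 15), ("3", 11), ("5", 6)],
    "5": [("4", 6), ("6", 9)],
    "6": [("1", 14), ("3", 2), ("5", 9)],
}
print(dijkstra(roads, "1"))   # e.g. shortest distance to "5" is 20 (1 -> 3 -> 6 -> 5)
```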

Another method for finding the shortest distance, which builds on Dijkstra's algorithm, is 'A*'. A* keeps track of the areas it has already evaluated, that is, the areas the algorithm has already explored, along with the areas immediately adjacent to them. For each candidate it combines the distance already traveled from the starting point with an estimated distance remaining to the goal, and it explores the most promising candidate first.
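A* can be sketched as the same loop as above with the priority queue ordered by distance traveled so far plus a straight-line estimate of the distance still to go. The graph and coordinates below are made up for illustration:

```python
import heapq, math

def a_star(graph, coords, start, goal):
    """A*: like Dijkstra, but the queue is ordered by distance-so-far plus
    a straight-line estimate of the remaining distance to the goal."""
    def h(node):
        (x1, y1), (x2, y2) = coords[node], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    dist = {start: 0}
    queue = [(h(start), start)]
    while queue:
        _, node = heapq.heappop(queue)
        if node == goal:
            return dist[node]
        for neighbor, weight in graph[node]:
            new_d = dist[node] + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(queue, (new_d + h(neighbor), neighbor))
    return None

roads = {"S": [("A", 2), ("B", 5)], "A": [("G", 3)], "B": [("G", 1)], "G": []}
coords = {"S": (0, 0), "A": (1, 1), "B": (2, 0), "G": (3, 0)}
print(a_star(roads, coords, "S", "G"))   # 5, via S -> A -> G
```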

The 'A*' algorithm and Dijkstra's algorithm are two of the most popular algorithms used by modern digital maps, and they are undeniably among the most useful algorithms ever created.

References: