
Crowdsourced Online Survey

Lacking access to an easily surveyed pool of potential respondents, we consulted the existing literature to determine the viability of conducting a survey through a crowdsourced contract-labor portal such as Amazon's Mechanical Turk [1].  We were pleased to discover that such crowdsourcing services "can be a potentially viable resource for researchers wishing to collect survey data" [2].  We therefore assembled a thirteen-question survey containing two main questions of importance: Q3, which asked respondents to rank our existing proposed functionality list; and Q4, an open-ended question inviting respondents to suggest desired functionalities not already present in the Q3 list.


Our budget of $0.45 to $0.50 per survey response allowed our team to collect 96 survey responses.  We determined respondents' geographical locations by uploading their latitude and longitude data into Google Earth Pro – roughly 56 (58%) of all respondents were in the northern United States, 5 (5.2%) were in India, 2 (2.1%) were in South America, and 1 each (1% each) were in France, Zimbabwe, Qatar, and England.
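
As an illustration of this mapping step, the sketch below converts a table of respondent coordinates into a KML file that Google Earth Pro can open.  This is a minimal Python sketch, not our actual tooling: the file respondents.csv and its column names (worker_id, latitude, longitude) are hypothetical placeholders, and the real Mechanical Turk export may use different field names.

    import csv

    # Minimal sketch: write respondent coordinates to a KML file for
    # Google Earth Pro.  "respondents.csv" and its columns ("worker_id",
    # "latitude", "longitude") are hypothetical placeholders.
    HEADER = ('<?xml version="1.0" encoding="UTF-8"?>\n'
              '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n')
    FOOTER = '</Document></kml>\n'

    with open("respondents.csv", newline="") as src, \
         open("respondents.kml", "w") as dst:
        dst.write(HEADER)
        for row in csv.DictReader(src):
            # Note that KML orders coordinates longitude-first.
            dst.write('<Placemark><Point><coordinates>'
                      f'{row["longitude"]},{row["latitude"]}'
                      '</coordinates></Point></Placemark>\n')
        dst.write(FOOTER)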


Our team attempted to increase the validity of our crowdsourced responses to the Q3 ranking question by including "the least important feature" as one of the items to be ranked.  We then disregarded any respondent who did not rank "the least important feature" at the bottom of the list.  Hence, of the 96 total respondents, only 59 surveys were used when assessing respondents' desired functionalities.
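
The screening rule itself is simple to state in code.  The sketch below illustrates it under assumed data structures: each response is a dictionary whose hypothetical q3_ranking field holds that respondent's ordered Q3 ranking, most important first.  It is not our actual analysis script.

    DECOY = "the least important feature"

    def passes_attention_check(response: dict) -> bool:
        """Keep a response only if the decoy item was ranked last."""
        return response["q3_ranking"][-1] == DECOY

    # Toy data: two respondents, one of whom buried the decoy mid-list.
    responses = [
        {"id": 1, "q3_ranking": ["SMS alerts", "GPS", DECOY]},
        {"id": 2, "q3_ranking": ["GPS", DECOY, "SMS alerts"]},
    ]
    valid = [r for r in responses if passes_attention_check(r)]
    print(f"{len(valid)} of {len(responses)} responses kept")  # 1 of 2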


View the survey questions by clicking here


Consolidating the Q4 Open-Ended Responses

 

When coding the open-ended responses to survey Q4, we first discarded all responses that did not include a functionality suggestion.  Next, we discarded suggestions that repeated functionality already included in the proposed functionality list provided in Q3.  However, since our Q3 list did not explicitly include "GPS coordinates," we decided to retain and code such responses along with references to "location" information.  Finally, because no suggestion made by only a single respondent proved of interest, we also discarded suggestions not made by at least two respondents.
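
For illustration, the sketch below expresses the last three of these steps in Python (the first step, discarding responses with no suggestion at all, was a judgment made while reading the free text).  The Q3 feature set, the alias table folding "GPS coordinates" into "location," and the key names are illustrative placeholders rather than our actual instruments.

    from collections import Counter

    # Illustrative placeholders for the Q3 list and the alias table.
    Q3_FEATURES = {"sms", "siren", "map view"}
    ALIASES = {"gps coordinates": "location"}

    def consolidate(suggestions: list) -> Counter:
        counts = Counter()
        for s in suggestions:
            key = ALIASES.get(s.lower(), s.lower())
            if key in Q3_FEATURES:
                continue  # already offered in Q3, so discard
            counts[key] += 1
        # Keep only suggestions made by at least two respondents.
        return Counter({k: n for k, n in counts.items() if n >= 2})

    print(consolidate(["GPS coordinates", "location", "SMS",
                       "call 911", "call 911"]))
    # Counter({'location': 2, 'call 911': 2})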

 

 

Emerging Themes of the Q4 Open-Ended Responses

 

We coded the Q4 responses into twenty keys (such as Call 911, GPS, Location, SMS, and Recordings).  We then grouped these keys into ten themes (such as Broadcast, Recording, UI, Security, and Weather).  For example, the most requested theme was "Broadcast," which includes the keys Call 911, GPS, Help, Location, and SMS.  Such responses included, "The ability to call 911 since mobile service is not required with a feature like this." and "Finding the exact location during eas warning."  The logic underlying this thematic grouping is that each of these functionalities entails information leaving the device running the PathShadow application.  The next two largest themes to emerge were Disable and EAS Levels: the former lets users easily turn the application on or off; the latter provides users with further information about the current EAS message.  The next two themes in importance were Recording and UI: the former covers requests for the PathShadow application to let users record voice, images, or video; the latter covers any respondent request for ease of application usability.  Finally, the last five themes – each mentioned by only two respondents – include requests concerning security, automatic updates, and links to third-party weather applications.  For example, respondents mentioned "Automatic updates" and "THE WEATHER CHANNEL APP."  The ten themes, with response counts in parentheses, are listed below.

 

  1. Broadcast (22) – Responses that mention functionality which requires the app to send an external communication (such as auto-call, call, call 911, send GPS, help, location, and SMS).

  2. Disable (4) – Responses that mention the ability to turn on or off the app.

  3. EAS Levels (4) – Responses that mention more detailed EAS warning messages.

  4. Recording (3) – Responses that mention voice, image, or video recording.

  5. UI (3) – Responses that mention the app's usability (customizable and ease of use).

  6. Non-Local (2) – Responses that mention tracking a person who is not local to the person doing the tracking.

  7. Security (2) – Responses that mention the security of the app.

  8. Siren (2) – Responses that mention the ability to find a person who is within hearing distance.

  9. Updates (2) – Responses that mention either automatic app updates or emergency updates.

  10. Weather (2) – Responses that mention being able to access a third-party weather app from within the app.

 

       Note:  The parenthetical numbers are the total number of coded responses included in each theme.
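
To make the rollup concrete, the sketch below totals coded keys into themes with a simple lookup table.  The mapping shown is partial and illustrative; the actual study used twenty keys across the ten themes above.

    from collections import Counter

    # Partial, illustrative key-to-theme map based on the groupings above.
    KEY_TO_THEME = {
        "call 911": "Broadcast", "gps": "Broadcast", "help": "Broadcast",
        "location": "Broadcast", "sms": "Broadcast",
        "disable": "Disable", "eas levels": "EAS Levels",
        "recordings": "Recording", "ease of use": "UI",
    }

    def theme_totals(coded_keys: list) -> Counter:
        """Roll coded response keys up into their themes."""
        return Counter(KEY_TO_THEME[k] for k in coded_keys)

    print(theme_totals(["call 911", "sms", "location", "disable"]))
    # Counter({'Broadcast': 3, 'Disable': 1})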


See the detailed survey results here
