
Big Data in the Age of AI (Quiz)


The hype around big data may have peaked several years ago, but big data is far from gone. Instead, it forms the foundation for some of today's most exciting technologies. Artificial intelligence (AI), machine learning, and data science all rely on big data: data that, by its velocity, volume, or variety, can't be easily stored or analyzed with traditional methods. In this nontechnical course, Barton Poulson digs into the topic of big data, explaining how it works and how it shapes our modern data universe. Barton explains big data's relationship to AI, data science, social media, and the Internet of Things (IoT). He goes over some of the ethical issues behind the use of big data. Plus, he covers techniques involved in analyzing big data, including data mining and predictive analytics.

  1. If you have data that comes in different formats, which characteristic of big data are you encountering?
    • velocity
    • volume
    • variance
    • variety
  2. Why is there a critical need for continual training and support for everyone in your organization who is exposed to big data?
    • so they can better understand how to process data in the applications they use
    • so they can better understand how to use big data to inform their own decisions
    • so they can better understand what is being said when others talk about big data
    • so they can better understand how big data can make decisions for them
  3. Which type of graphics will provide the best value when you are visualizing big data?
    • bar charts, histograms, and line charts
    • the most professional-looking visualizations
    • three-dimensional graphics
    • interactive visualizations
  4. How is the growth of big data best described?
    • as vertical growth
    • as circular growth
    • as exponential growth
    • as linear growth
  5. What is the 80/20 rule when working on a big data project?
    • This rule states 80 percent of your time is spent on wild-caught data, and 20 percent on bespoke data.
    • This rule states 80 percent of your time is spent gathering data, and 20 percent is spent preparing data.
    • This rule states 80 percent of your time is spent analyzing data, and 20 percent is spent preparing data.
    • This rule states 80 percent of your time is spent preparing data, and 20 percent is spent analyzing data.
  6. For consumers, what is usually the most important reason to bring big data out of the cloud and into the fog?
    • connectivity issues
    • low latency
    • strengthened privacy
    • reduced server load
  7. Why is it common in data science to break an original data set into a training data set and a testing data set?
    • to control for loss of data in the original data set
    • to control for tendencies toward the role of chance
    • to control for unexpected market situations
    • to control for tendencies toward false positives
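
The train/test split in the question above can be made concrete with a small sketch. The helper below (its name, the 80/20 ratio, and the toy data are illustrative choices, not from the quiz) shuffles a data set and splits it into training and testing portions, so a model fit on one portion can be checked against data it never saw:

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle rows and split them into training and testing sets."""
    rng = random.Random(seed)
    shuffled = rows[:]            # copy so the original order is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))           # 100 toy records
train, test = train_test_split(data)
print(len(train), len(test))      # 80 20
```

Shuffling before the split matters: if the original data set is ordered (say, by date), slicing it without shuffling would put systematically different records in each portion.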
  8. What can using big data help you avoid in interactions with both current and potential customers?
    • false positives
    • edge cases
    • agile responsiveness
    • unstructured data
  9. How can big data provide your business a competitive advantage in its operations?
    • by allowing you to identify potential benefits and risks to your business
    • by allowing you to identify new patterns and avoid any nuances in the data
    • by allowing you to identify new trends and markets, and start engaging with them immediately
    • by allowing you to utilize the built-in analytics for your business’s websites and social media accounts
  10. You would like to predict the value of future purchases by a particular client. Which method relies on direct computer calculation, rather than on imitating how a human would reason?
    • neural networks
    • K-means clustering
    • decision trees
    • linear regression
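
As a sketch of the "direct computer calculation" option, a simple least-squares linear regression fits entirely in standard-library Python. The client purchase figures below are made up for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x using the closed-form formulas."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: month number vs. purchase value for one client.
months = [1, 2, 3, 4, 5]
spend = [100, 110, 120, 130, 140]
a, b = fit_line(months, spend)
print(round(a + b * 6))           # predicted value for month 6: 150
```

Unlike a neural network or a decision tree, nothing here mimics human judgment; the prediction is just arithmetic on the fitted slope and intercept.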
  11. Why is it a good idea to get started in your big data journey with open data?
    • Open data is diverse, plentiful, and easily accessible.
    • Open data is small, closely held data sets.
    • Open data does not require understanding context.
    • Open data is company data that you have access to.
  12. In what area of anomaly detection will you look for outliers that have not been previously addressed?
    • biometrics use
    • potential value
    • process failure
    • fraud detection
  13. Organizations already have people in place who know their own data. With this in mind, which part of the data science Venn diagram holds the most difficult skills for someone to learn?
    • context
    • quant
    • code
    • substantive expertise
  14. Why does the minimal approach to data privacy offer ineffective protection?
    • because it blocks all data other than data that can be used for statistical modeling
    • because it only blocks data that is an answer to a direct question regarding personal characteristics
    • because it only blocks data that contains obvious personal identification characteristics
    • because it blocks all data containing variables correlated with personal characteristics
  15. _________ allow you to create big data applications and run them on basically any kind of architecture.
    • Files
    • Cubes
    • Servers
    • Containers
  16. What are the "three Vs" of big data?
    • velocity, vagueness, and volume
    • volume, velocity, and variety
    • variety, validation, and vantage
    • validation, vitality, and virtue
  17. What is stream processing designed to look for?
    • as much detail as desired
    • static and stationary data
    • quick trends or immediate anomalies
    • patterns, groups, and predictions
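
The idea of watching a stream for immediate anomalies, rather than analyzing data at rest, can be sketched with a sliding window. The window size and threshold below are arbitrary illustrative choices:

```python
from collections import deque

def detect_anomalies(stream, window=5, threshold=2.0):
    """Flag values that deviate sharply from a sliding-window mean."""
    recent = deque(maxlen=window)
    flagged = []
    for value in stream:
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = var ** 0.5
            # Flag the value if it sits far outside the recent range.
            if std > 0 and abs(value - mean) > threshold * std:
                flagged.append(value)
        recent.append(value)
    return flagged

readings = [10, 11, 10, 12, 11, 10, 95, 11, 10, 12]
print(detect_anomalies(readings))  # [95]
```

Because each value is checked against only the last few observations, the spike is caught the moment it arrives, without storing or rescanning the whole stream.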