     In this article I am going to discuss a complete view of using big data and analytics techniques over vast geological survey data, as they are applied to the oil and gas industry. There is always a need for optimization in oil and gas exploration and production, which spans the finding, locating, producing and distribution stages, and the gathered data shows how data analytics can provide such optimization. This improves the exploration, development, production and renewal of oil and gas resources.
     By using large-scale geological data, applying statistical and quantitative analysis, exploratory and predictive modeling, and fact-based management approaches over those data, we can gather productive information for decision making, and we can also gain insights into oil and gas operations.

As found on the internet, the three major issues that face the oil and gas industry during the exploration and production stages are:
1. Data management. This includes the storage of large-scale structured and unstructured data that can be used for analysis, and the effective retrieval of information using analytical and statistical methods.
2. Quantification of data. This includes the application of statistical and data analytics methods for making predictions and deriving insights from those predicted values (a minimal sketch of this kind of quantification follows this list).
3. Risk assessment. This includes predictive analysis of the gathered data against known risks, realized using mathematical models, so that we know how to properly deal with unknown risks.
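
To make the quantification and risk assessment ideas concrete, here is a minimal Python sketch of the kind of Monte Carlo uncertainty analysis often used for reserves estimation. All the distribution parameters below are invented for illustration, not taken from any field data:

```python
# Minimal Monte Carlo sketch for quantifying uncertainty in recoverable
# reserves. All distribution parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of simulated scenarios

# Volumetric inputs, each drawn from an assumed distribution.
area_acres = rng.lognormal(mean=np.log(500), sigma=0.3, size=n)
net_pay_ft = rng.lognormal(mean=np.log(50), sigma=0.4, size=n)
recovery = rng.uniform(0.1, 0.35, size=n)   # assumed recovery factor range
bbl_per_acre_ft = 400                        # assumed yield constant

reserves = area_acres * net_pay_ft * bbl_per_acre_ft * recovery

# Industry convention: P90 is the conservative estimate (90% chance of
# exceeding it), so it maps to the 10th percentile of the distribution.
p90, p50, p10 = np.percentile(reserves, [10, 50, 90])
print(f"P90: {p90:,.0f} bbl  P50: {p50:,.0f} bbl  P10: {p10:,.0f} bbl")
```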

We know that oil companies are using sensors located throughout the oil field in a distributed manner, high-end communication devices, and data-mining techniques to monitor and track field drilling operations remotely. The aim is to use real-time data to make better decisions and predict problems.
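
As a hedged illustration of that real-time monitoring idea, here is a small Python sketch that flags sensor readings drifting far from a rolling baseline. The thresholds and the simulated pressure feed are my own assumptions, not any vendor's actual pipeline:

```python
# Flag sensor readings more than 3 standard deviations from a rolling
# baseline. Window size, threshold and the synthetic feed are assumptions.
from collections import deque
import random
import statistics

random.seed(7)
window = deque(maxlen=60)  # last 60 readings form the rolling baseline

def is_anomalous(reading: float) -> bool:
    """Return True when a reading sits far outside the rolling baseline."""
    if len(window) < 30:          # not enough history yet
        window.append(reading)
        return False
    mean = statistics.fmean(window)
    stdev = statistics.stdev(window)
    window.append(reading)
    return stdev > 0 and abs(reading - mean) > 3 * stdev

# Simulated downhole pressure feed with one injected spike at t=150.
for t in range(200):
    value = random.gauss(3000, 25) + (400 if t == 150 else 0)
    if is_anomalous(value):
        print(f"t={t}: possible problem, pressure={value:.0f} psi")
```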

     Oil is not found in big, cavernous pools in the ground. It resides in layers of rock, stored in the tiny pores between the grains of rock. Much of the rock containing oil is tighter than the surface on which your computer currently sits. Further, economic oil is found in areas that have structurally trapped the oil and gas, leaving no way out. Without a structural trap, oil and gas commonly migrate throughout the rock, resulting in lower pressures and uneconomic deposits. All of the geological components play an important role, and in drilling wells all components are technically challenging. These data can be gathered pictographically or by means of sensors, resulting in large-scale unstructured data.
     
     In these kinds of industries, organizations must apply new technologies and processes that capture and transform large-scale unstructured data, such as geographical images and sensor data, as well as structured data, into actionable insight to improve exploration and production value and yield while enhancing safety and protecting the environment. Oil well and field operations are equipped with sensor instruments that capture readings to give a view of equipment performance and well productivity, including reservoir, well, facilities and export data.
Leading, analytics-driven oil and gas organizations are connecting people with trusted information to predict business outcomes and to make real-time decisions that help them outperform their competitors. Yet despite the wealth of data and content available today, decision makers are often starved for true insight.
     
     The processes and decisions related to oil and natural gas exploration, development and production generate large amounts of data, and the data volume grows daily with new acquisition, processing and storage solutions and with the development of new devices to track a wider group of reservoirs in a field, along with machinery and employee performance.

     While surfing the internet, I found the following three big oil industry problems that consume money and produce data, and where big data, data mining and analytics techniques can deliver insights that reduce the risk factors:
1. Oil is hard to find. Reservoirs are generally 5,000 to 35,000 feet below the Earth’s surface. Low-resolution imaging and expensive well logs (available only after the wells are drilled) are the only options for finding and describing reservoirs. The rock is difficult for fluids to move through to the wellbore, and the fluids themselves are complex, with many different physical properties.
2. Oil is expensive to produce. The large amount of science, machinery and manpower required to produce a barrel of oil must be deployed profitably, taking into account cost, quantity and market availability.
3. Drilling for oil presents potential environmental and human safety concerns that must be addressed.
Finding and producing oil involves many specialized scientific domains (i.e., geophysics, geology and engineering), each solving important parts of the equation. When combined, these components describe a localized system containing hydrocarbons. Each localized system (reservoir) has a unique recipe for getting the most out of the ground profitably and safely.
So we can conclude that the oil and gas industry has an opportunity to capitalize on big data analytics solutions. The industry now needs to educate big data practitioners on the types of data it captures, in order to utilize existing data in faster, smarter ways that focus on helping find and produce more hydrocarbons, at lower cost, in economically sound and environmentally friendly ways.



Like many people of a certain age, my first exposure to the term dashboard was when I developed one for monitoring corrective and preventive actions!
I have realised that dashboard design itself is now the essence of simplicity and cutting-edge technology, and stylish with it too, arousing passions about what makes a great interface for analysis.
When it comes to software applications and websites, dashboards are all around us too!

The era of big data has arrived, but most organizations are still unprepared. Enterprises erroneously believe, and act like, big data is a passing fad and nothing has really changed. But big data is not a temporary thing, and by acting as if it were, companies are missing out on tremendous opportunities.
So what is it?
     Like many of us know, an enterprise application dashboard is a one-stop shop of information. It’s a page made up of portlets or regions, grouping related information into displays of graphs, charts, and graphics of different kinds. Dashboards visualize a breadth of information that spreads over a large range of activities in an application or functional area.
There are numerous case studies explaining how organizations that use visual representations are locating and leveraging valuable insights from large sets of structured and unstructured data, i.e., big data, asking better questions, and making better decisions.

Does it solve the purpose?
Yes! Dashboards are designed to aggregate structured and unstructured data into meaningful visual displays and representations, using analytical formulas over the available data sets at the backend to do the analysis and derivation work that users used to do with notepads, calculators or spreadsheets to find out what’s changed or what needs attention.
Dashboards over a large amount of data enable users to prioritize work and to manage exceptions by taking lightweight actions immediately from the page, or to drill down to explore and do more in a transactional or analytics work area, if necessary.
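
To show what I mean, here is a minimal Python sketch of the dashboard idea: several portlet-like panels on one page, each a different view of the same operations data. The KPIs and numbers are synthetic, purely for illustration:

```python
# A toy "dashboard": three portlet-like panels over synthetic data.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(30)
production = 1000 + rng.normal(0, 40, 30).cumsum()  # trend panel
downtime = rng.poisson(2, 30)                       # exceptions panel
regions = ["North", "South", "East", "West"]
output_by_region = rng.uniform(200, 600, 4)         # comparison panel

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
ax1.plot(days, production)
ax1.set_title("Daily production (bbl)")
ax2.bar(days, downtime, color="firebrick")
ax2.set_title("Downtime events")
ax3.barh(regions, output_by_region)
ax3.set_title("Output by region")
fig.tight_layout()
plt.show()
```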
The design of dashboards over very large amounts of data, on the other hand, is much more open to interpretation. Most of these big data dashboards are simply a series of graphs, charts, gauges, or other visual indicators that a user has chosen to monitor, some of which may be strategically important and others of which may not be. Even if a strategic link exists, it may not be clear to the person monitoring the dashboard, since the objective statements that explain what achievement is desired are typically not present on dashboards.
Why this?

I found it interesting that there are both an infographics category and a data visualization category. My interpretation is that the entries in the infographics section are static and illustrated, while those in the data visualization section are generated and data-driven.
Nowadays, big data can be used to gain better insight through data visualization, using superior tools and techniques to present and analyze the available data.
On the other hand, it is economical in terms of space and would probably work in almost every case, which are two things that dashboards should be good at. So while I wouldn’t have used it myself, I can understand why this decision was made. What makes a dashboard, or any other information-based design, successful is neither the design execution nor the clever information analysis and visualization technique.

These kinds of dashboards, ultimately, are meant to be useful and to solve a specific problem. Dashboards represent a powerful means of communication for business users nowadays, when companies accumulate large amounts of data. These visually compressed representations of only the most important data are used for tracking.
DataViz, in my view!

Data visualizations can unintentionally bias the viewer as a result of the choices made in visual method; sometimes a visualization fails because it does not account for the viewer's assumptions (cultural ones, for instance: is RED a good or a bad color?).
One interesting thing I always think about is creating visualizations that let the human eye discover something that can't be discovered by a program. But there will always be a challenge in showing enough data to give a sense of context while providing enough detail to enable understanding.

What then?

Whenever a visualization is based on big data, once a data visualization designer is aware of the simple principles of presenting data on a screen, they can apply them to any report or graph, data analysis or information dashboard without changing its context or meaning. Only then will it provide a powerful means of making sense of data. When done properly, data visualization will make us think, compare data, read stories out of our data, and put data in the right context, ultimately helping decision-makers make the right decisions regardless of the type or amount of data available.
Do you have any thoughts on this? I am waiting to hear from you!



Why use big data tools to analyse web analytics data?
     Because web event data is incredibly valuable. It tells us how our customers actually behave (in lots of detail), and how that varies. We can also analyse differences between customers, or the same customers over time.
    It gives insight into the customer behaviour on our website that drives value, and tells us how customers engage with us via our website / webapp.
      Utilizing the data that web analytics packages provide can help an ecommerce operator improve his business, and understanding how the data is collected can help him understand web analytics.

     Web analytics software helps ecommerce operators understand what their online visitors are up to. Which search engine did the visitors come from? How long did they remain on the site? Which web pages did they exit the site from? And so forth.
OK, what about the limitations?
     There are significant limitations in the way traditional web analytics programmes handle data collection: they are sample-based (e.g. Google Analytics), support only a limited set of events (e.g. page views, goals, transactions) and a limited set of ways of describing events, and they constrain data processing and data access.
    Data is processed ‘once’: it is either aggregated or available as a complete log file, with no validation and no opportunity to reprocess. Data is aggregated prematurely, only particular combinations of metrics / dimensions can be pivoted together (Google Analytics), and only particular types of analysis are possible on different types of dimension. As a result, the data is siloed: hard to join with other data sets.
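
To illustrate the contrast, here is a small Python sketch of what big data tools buy you: keep the raw event-level log and aggregate on demand, so any combination of metrics and dimensions can be pivoted later. The event log below is a made-up stand-in for real collector output:

```python
# Keep the raw event log; aggregate on demand instead of prematurely.
import pandas as pd

events = pd.DataFrame([
    {"user": "u1", "event": "page_view", "page": "/home",     "device": "mobile"},
    {"user": "u1", "event": "page_view", "page": "/pricing",  "device": "mobile"},
    {"user": "u2", "event": "page_view", "page": "/home",     "device": "desktop"},
    {"user": "u2", "event": "purchase",  "page": "/checkout", "device": "desktop"},
    {"user": "u3", "event": "page_view", "page": "/pricing",  "device": "mobile"},
])

# Any combination of dimensions can be pivoted after the fact...
print(events.pivot_table(index="page", columns="device",
                         values="user", aggfunc="nunique", fill_value=0))

# ...and the same raw log can be reprocessed for a new question later.
print(events[events["event"] == "purchase"]["user"].nunique(), "buyers")
```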
     Dive into the data, but don’t push out all information!

But we have to take care of 7 key steps:
1. Target. Define your most important goals and those of your most important Web visitors.
2. Assess. Analyze Web traffic data to uncover what’s happening on your Web site. Identify gaps.
3. Focus. Define the gap you need to close.
4. Plan. Map out a plan to close the gap and measure progress.
5. Act. Execute the plan.
6. Measure. Gauge success using Web analytics.
7. Refine. Don’t stop now! Raise the bar, set new goals, keep climbing.

     There are several methods by which web analytics packages collect data. Each method has its advantages and disadvantages. Analysts need to understand each of them to follow how a particular analytics package works.

Shall we take a look!

What is the log file method?
     This method refers to the tracking files that are routinely stored on a web host’s server. These files automatically record visitor behavior (such as time on site, pages visited, exit pages and much more). Hosting companies and webmasters use these files to manage storage and bandwidth issues. But the log files can also be parsed and analyzed by software, and the data produced by that software can help website owners improve their businesses.
     The log file method tends to be less accurate than the JavaScript method, but it is less expensive: a website analyst can typically analyze the log files at no additional cost, and log files exist whether a website operator uses them or not. In that respect, the operator does not have to change his site or add extra code to it. This is different from the JavaScript method, which requires additional code to be added to each page of a site and may require other programming changes.
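
Here is a minimal Python sketch of the log file method: parsing web server log lines in the common "combined" format and counting page views per URL. The sample lines are invented:

```python
# Parse Apache/Nginx-style log lines and count successful page views.
import re
from collections import Counter

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

sample_log = [
    '203.0.113.5 - - [10/Oct/2024:13:55:36 +0000] "GET /home HTTP/1.1" 200 5120',
    '203.0.113.5 - - [10/Oct/2024:13:56:01 +0000] "GET /pricing HTTP/1.1" 200 2048',
    '198.51.100.7 - - [10/Oct/2024:14:01:12 +0000] "GET /home HTTP/1.1" 200 5120',
]

views = Counter()
for line in sample_log:
    m = LOG_RE.match(line)
    if m and m.group("status") == "200":
        views[m.group("path")] += 1

print(views.most_common())   # e.g. [('/home', 2), ('/pricing', 1)]
```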
What about the JavaScript method?
     This method does not require log files at all. Instead, it relies on JavaScript code that is included with each web page. The JavaScript sends visitor activity to a computer that is hosted by the web analytics service provider. The site owner then uses a client viewer or web browser to view the processed analytics for the site.
     Web analytics companies almost universally rely on the JavaScript method; we prefer it for our clients because the data is real-time, whereas the data from log files sometimes takes up to a day or two to access, which is frequently too long to wait. The JavaScript method does, however, raise privacy issues.
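
For a feel of the receiving side of the JavaScript method, here is a toy Python sketch of a collector: the page's tracking script would send a small beacon request with the visitor's activity, and an endpoint like this records it. This is an illustrative stand-in, not any real vendor's endpoint:

```python
# Toy collector for JavaScript beacon requests (illustrative only).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Collector(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        # A real service would queue this event for processing;
        # here we just log it to stdout.
        print("event:", {k: v[0] for k, v in params.items()})
        self.send_response(204)   # a beacon needs no response body
        self.end_headers()

if __name__ == "__main__":
    # The page's JS might request e.g. /collect?page=/home&visitor=abc123
    HTTPServer(("localhost", 8000), Collector).serve_forever()
```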
How can we address cookies and privacy concerns?
     That’s because the JavaScript method relies on placing cookies on an (oftentimes) unsuspecting visitor’s computer, and then allows an independent company to store and review that data. Although most web analytics vendors remain indifferent to their customers’ data, a few are beginning to use customers’ visitor data to issue press statements about conversion statistics, and do not rule out its use for advertising purposes.
     Complicating these privacy concerns is the use of “first party” and “third party” cookies. First party cookies are set directly by the website itself. If a website hires an independent company which itself provides a cookie for the browser when a user visits that site, that cookie is called a “third party” cookie. In that instance, the independent company presumably saves the data collected by the cookie, and the website has less control over that data. Third-party cookies, many experts conclude, protect visitors’ privacy less than first-party cookies.
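
As a small illustration of that distinction, the sketch below builds two cookies in Python; what makes a cookie "third party" is that it is set via a request to a domain other than the site the user is visiting. The domains are examples only:

```python
# Illustrative first-party vs third-party Set-Cookie headers.
from http.cookies import SimpleCookie

first = SimpleCookie()
first["visitor_id"] = "abc123"
first["visitor_id"]["domain"] = "shop.example.com"  # the site being visited
print("First party: ", first.output())

# Set via an embedded request to an analytics vendor's domain instead.
third = SimpleCookie()
third["visitor_id"] = "abc123"
third["visitor_id"]["domain"] = "analytics-vendor.example"
print("Third party: ", third.output())
```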
How about Aggregate Web Analytics?
     In this method, information about visitors’ interactions with a website is processed and transformed into statistical data, such as how many visitors were reached, the total number of visitors who performed certain actions, page visits, page hits, and so on.

Why so? Because these are all quantitative metrics.
Then, how is aggregate web analytics useful to us?
     This type of statistical data is essential for pattern recognition / trend analysis (understanding the trend) and for the overall success of a website, e-commerce or web-based business; it is also helpful for measuring efficiency across different segments and categories of visitors.
     But we have to be clear that aggregated metrics can only indicate if and where something goes wrong on a website. Even with these clear result sets, they can't tell what happened or why it happened; they provide only pointers that facilitate further analysis and improvement.
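
Here is a short Python sketch of those aggregate, quantitative metrics computed from a made-up event log: unique visitors, total page views, and a simple conversion rate:

```python
# Aggregate metrics over a made-up (visitor, action) event log.
from collections import Counter

events = [
    ("v1", "page_view"), ("v1", "purchase"),
    ("v2", "page_view"),
    ("v3", "page_view"), ("v3", "purchase"),
]

visitors = {v for v, _ in events}
counts = Counter(action for _, action in events)
buyers = {v for v, action in events if action == "purchase"}

print(f"visitors: {len(visitors)}")
print(f"page views: {counts['page_view']}")
print(f"conversion rate: {len(buyers) / len(visitors):.0%}")
```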
How about Individual Web Analytics?
     Individual visitor tracking is a more personalized method of web analytics, involving in-depth tracking of various aspects of a user's interaction with the website; it refers mostly to eye and mouse tracking systems, user testing or direct visitor feedback.
     Individual web analytics is useful because tracking the on-site behavior of visitors lets you understand the reasons behind the bounce rate or low revenues (if any) detected using the aggregated data. It provides information about on-page content behavior, even deriving exactly which text on a page captures the most attention. But there is little chance it will help much with issues such as increasing your website traffic or improving organic relevancy.
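
And as a hedged sketch of the individual-level view, the Python snippet below reconstructs one visitor's click path and dwell times from timestamped events (invented here); real individual analytics would layer mouse or eye tracking data on top:

```python
# Reconstruct one visitor's click path and dwell times from invented events.
from datetime import datetime

events = [
    ("2024-10-10 09:00:00", "/home"),
    ("2024-10-10 09:00:40", "/pricing"),
    ("2024-10-10 09:03:10", "/signup"),
]

parsed = [(datetime.fromisoformat(ts), page) for ts, page in events]
for (t0, page), (t1, _) in zip(parsed, parsed[1:]):
    print(f"{page}: {(t1 - t0).seconds}s on page")
print(f"{parsed[-1][1]}: exit page")
```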
So, what do you think? Have you tried any of these so far? I'd be glad to hear your thoughts!
 
