Free Essay

Big Data

In: Computers and Technology

Submitted By krishmellempudi
Words 2026
Pages 9
Origins of OOP: Polymorphism of operations (operator overloading). A characteristic of OO systems in general is that they provide polymorphism of operations, also known as operator overloading. This concept allows the same operator name or symbol to be bound to two or more different implementations of the operator, depending on the type of objects to which the operator is applied.

Multiple inheritance and selective inheritance. Multiple inheritance occurs when a subtype T is a subtype of two (or more) types and hence inherits the functions (attributes and methods) of both supertypes. For example, we may create a subtype ENGINEERING_MANAGER that is a subtype of both MANAGER and ENGINEER. Selective inheritance occurs when a subtype inherits only some of the functions of a supertype; the other functions are not inherited.

Characteristics of object databases: An ODMS provides a unique identity to each independent object stored in the database. This identity is typically implemented via a unique, system-generated object identifier (OID). The main property required of an OID is that it be immutable; that is, the OID value of a particular object should not change. The ODMS must therefore have a mechanism for generating OIDs and preserving the immutability property. It is also desirable that each OID be used only once: even if an object is removed from the database, its OID should not be assigned to another object. These two properties imply that an OID should not depend on any attribute values of the object, since the value of an attribute may be changed or corrected.

Type constructors. The three most basic constructors are atom, struct (or tuple), and collection. The atom constructor covers the basic built-in data types of the object model, which are similar to the basic types in many programming languages: integers, strings, floating-point numbers, enumerated types, Booleans, and so on. They are called single-valued or atomic types, since each value of the type is considered an atomic (indivisible) single value. The struct (or tuple) constructor creates standard structured types, such as the tuples (record types) of the basic relational model. Collection (or multivalued) type constructors include set(T), list(T), bag(T), array(T), and dictionary(K,T). These allow part of an object or literal value to be a collection of other objects or values when needed. These constructors are also considered type generators.

Persistence. Transient objects exist in the executing program and disappear once the program terminates. Persistent objects are stored in the database and persist after program termination. The typical mechanisms for making an object persistent are naming and reachability. An extent is a named persistent object whose value is a persistent collection holding objects of the same type that are stored permanently in the database; these objects can be accessed and shared by multiple programs. It is also possible to create a transient collection, which exists temporarily during the execution of a program but is not kept when the program terminates. Operations and method names can be overloaded to apply to different object types with different implementations.
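To make the polymorphism and multiple-inheritance ideas above concrete, here is a minimal Python sketch (not part of the original notes). The ENGINEER/MANAGER/ENGINEERING_MANAGER hierarchy follows the text's example; all other names and attributes are invented for illustration.

```python
# Illustrative sketch of method-name overloading (polymorphism) and multiple inheritance.

class Engineer:
    def __init__(self, name, projects=None):
        self.name = name
        self.projects = list(projects or [])   # collection-valued attribute (list constructor)

    def describe(self):                        # one implementation of 'describe'
        return f"{self.name}: engineer on {len(self.projects)} project(s)"


class Manager:
    def __init__(self, name, reports=None):
        self.name = name
        self.reports = set(reports or [])      # collection-valued attribute (set constructor)

    def describe(self):                        # same operation name, different implementation
        return f"{self.name}: manager of {len(self.reports)} report(s)"


class EngineeringManager(Engineer, Manager):   # multiple inheritance, as in ENGINEERING_MANAGER
    def __init__(self, name, projects=None, reports=None):
        Engineer.__init__(self, name, projects)
        self.reports = set(reports or [])

    def describe(self):                        # overrides both inherited versions
        return (f"{self.name}: engineering manager "
                f"({len(self.projects)} project(s), {len(self.reports)} report(s))")


if __name__ == "__main__":
    staff = [Engineer("Ana", ["DB kernel"]),
             Manager("Raj", ["Ana", "Lee"]),
             EngineeringManager("Kim", ["Query optimizer"], ["Ana"])]
    for person in staff:
        # The same call is bound to a different implementation depending on the object's type.
        print(person.describe())
```

Calling describe() binds the same operation name to a different implementation per object type, which is exactly the overloading described above, while EngineeringManager inherits the attributes and methods of both supertypes.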
Relational design. Semantics is the study of meaning in language; it can be applied to entire texts or to single words. The semantics of a relation refers to its meaning resulting from the interpretation of the attribute values in a tuple. A basic design guideline is to design each relation schema so that its meaning is easy to explain; the semantics of the attributes should be easy to interpret.

Anomalies. Database anomalies are problems in relations that occur due to redundancy, and they affect the processes of inserting, deleting, and modifying data. An insert anomaly occurs when certain attributes cannot be inserted into the database without the presence of other attributes; for example, we cannot add a new course unless we have at least one student enrolled in the course. A delete anomaly exists when certain attributes are lost because of the deletion of other attributes; for example, if student S30 is the last student to leave a course, all information about the course is lost. An update (modification) anomaly exists when one or more instances of duplicated data are updated, but not all; for example, if Jones moves address, every instance of Jones's address must be updated.

Problems with NULLs include wasted storage space and difficulty understanding the meaning of the data. Avoid placing attributes in a base relation whose values may frequently be NULL; if NULLs are unavoidable, make sure that they apply only in exceptional cases and not to a majority of tuples.

Spurious tuples. A spurious tuple is, informally, a record that appears when two tables are joined badly: spurious tuples are generated by joining two relations on attributes that are neither primary keys nor foreign keys of those relations. They can be prevented by decomposing the original relation so that the decomposed relations share primary key/foreign key attributes, which forces joins to be on primary and foreign keys.
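The effect is easy to simulate. Below is a small Python sketch (added for illustration, not from the original notes); the WORKS_ON relation and its attribute names are made-up examples of a bad decomposition whose pieces share only a non-key attribute.

```python
# Joining two relations on a non-key attribute produces spurious tuples.

# Original relation: which employee works on which project, where, and for how long.
works_on = [
    {"ename": "Smith", "pname": "ProductX", "plocation": "Stafford", "hours": 10},
    {"ename": "Wong",  "pname": "ProductY", "plocation": "Stafford", "hours": 20},
]

# A bad decomposition: the two pieces share only 'plocation', which is not a key of either.
emp_locs   = [{"ename": t["ename"], "plocation": t["plocation"]} for t in works_on]
proj_hours = [{"pname": t["pname"], "plocation": t["plocation"], "hours": t["hours"]} for t in works_on]

def natural_join(r, s, on):
    """Join two relations (lists of dicts) on a single shared attribute."""
    return [{**a, **b} for a in r for b in s if a[on] == b[on]]

rejoined = natural_join(emp_locs, proj_hours, on="plocation")
print(len(works_on), "original tuples vs.", len(rejoined), "tuples after the bad join")
for t in rejoined:
    tag = "" if t in works_on else "   <-- spurious tuple"
    print(t, tag)
```

Rejoining the pieces yields four tuples instead of two; the extra tuples pair each employee with a project at the same location that the employee never worked on, which is exactly the spurious-tuple problem.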
Objects and literals are the basic building blocks of the object model. The main difference between the two is that an object has both an object identifier and a state (or current value), whereas a literal has a value (state) but no object identifier; in either case, the value can have a complex structure. An object has five aspects: identifier, name, lifetime, structure, and creation.
1. The object identifier is a unique system-wide identifier; every object must have one.
2. Some objects may optionally be given a unique name within a particular ODMS. This name can be used to locate the object, and the system should return the object given that name.
3. The lifetime of an object specifies whether it is a persistent or a transient object. Lifetimes are independent of types; that is, some objects of a particular type may be transient whereas others may be persistent.
4. The structure of an object specifies how the object is constructed by using the type constructors; in particular, it specifies whether the object is atomic or not.
5. Object creation refers to the manner in which an object can be created.

There are three types of literals: atomic, structured, and collection. Atomic literals correspond to the values of basic data types and are predefined; the basic data types of the object model include long, short, and unsigned integers, among others. Structured literals correspond roughly to values constructed using the tuple constructor. Collection literals specify a literal value that is a collection of objects or values, but the collection itself does not have an object identifier.

Functional dependency. A functional dependency is a constraint between two sets of attributes of a relation. If R is a relation schema with attribute sets X and Y, the functional dependency X -> Y specifies that Y is functionally dependent on X: the values of X uniquely determine the values of Y. A functional dependency is a property of the semantics (meaning) of the attributes. The relation extensions that satisfy all the functional dependency constraints are called legal relation states of R. The main use of functional dependencies is to further describe a relation schema R by specifying constraints on its attributes that must hold at all times. A functional dependency is a property of the relation schema R itself, not of a particular legal relation state r of R.
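As a small illustrative sketch (not part of the original notes), the following Python function checks whether a functional dependency X -> Y holds in one relation state; the EMP_DEPT relation and its attribute names are assumptions made for the example.

```python
# Check whether X -> Y holds in a relation state given as a list of dicts.

def fd_holds(rows, X, Y):
    """Return True if every combination of X-values maps to exactly one combination of Y-values."""
    seen = {}
    for row in rows:
        x_val = tuple(row[a] for a in X)
        y_val = tuple(row[a] for a in Y)
        if x_val in seen and seen[x_val] != y_val:
            return False          # same X-values, different Y-values: dependency violated
        seen[x_val] = y_val
    return True

# A toy EMP_DEPT state (names are illustrative only).
emp_dept = [
    {"ssn": "111", "dnumber": 5, "dname": "Research"},
    {"ssn": "222", "dnumber": 5, "dname": "Research"},
    {"ssn": "333", "dnumber": 4, "dname": "Admin"},
]

print(fd_holds(emp_dept, X=["ssn"], Y=["dnumber"]))      # True in this state
print(fd_holds(emp_dept, X=["dnumber"], Y=["dname"]))    # True in this state
print(fd_holds(emp_dept, X=["dname"], Y=["ssn"]))        # False: dname does not determine ssn
```

Note that passing such a check for one state proves nothing about the schema: as the notes say, a functional dependency is a property of the relation schema, so examining a single legal state can only refute a dependency, never confirm it.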
Storage hierarchy. Primary storage media can be operated on directly by the computer's central processing unit (CPU); examples are the computer's main memory and the smaller but faster cache memories. Primary storage usually provides fast access to data but has limited capacity and is more expensive. Secondary (and tertiary) storage devices usually have larger capacity, cost less, and provide slower access to data than primary storage devices, and their data cannot be processed directly by the CPU: it must first be copied into primary storage. Secondary storage includes magnetic disks (for example, hard-disk drives), optical disks (CD-ROMs, DVDs, and similar media), and tapes.

RAID and data striping. The main goal of RAID is to even out the widely different rates of performance improvement of disks against those of memory and microprocessors. While RAM capacities have quadrupled every two to three years, disk access times are improving at less than 10 percent per year, and disk transfer rates at roughly 20 percent per year. Disk capacities are improving at more than 50 percent per year, but the speed and access-time improvements are of a much smaller magnitude. A second disparity exists between the special microprocessors that cater to new applications involving video, audio, image, and spatial data processing and the corresponding lack of fast access to large, shared data sets. The natural solution is a large array of small independent disks acting as a single higher-performance logical disk. A concept called data striping is used, which utilizes parallelism to improve disk performance: data striping distributes data transparently over multiple disks to make them appear as a single large, fast disk.

SQL privileges. In SQL, the following types of privileges can be granted on each individual relation R. The SELECT (retrieval or read) privilege on R gives the account the privilege to use the SELECT statement to retrieve tuples from R. The modification privileges on R give the account the capability to modify the tuples of R; in SQL this includes the three privileges UPDATE, DELETE, and INSERT, corresponding to the three SQL commands for modifying a table R, and the INSERT and UPDATE privileges can additionally specify that only certain attributes of R may be modified by the account. The REFERENCES privilege on R gives the account the capability to reference (refer to) the relation R when specifying integrity constraints; this privilege can also be restricted to specific attributes of R.

GRANT is the command used to give users access or privileges on database objects; for example, granting a user edit privileges (SELECT, UPDATE, INSERT, and DELETE) allows the user to both view and modify the contents of a relation. The REVOKE command removes a user's access rights or privileges on database objects. In some cases it is desirable to grant a privilege only temporarily: the owner of a relation may want to grant the SELECT privilege to a user for a specific task and then revoke it once the task is completed. SQL therefore includes a REVOKE command for cancelling privileges.
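As a small sketch (not from the original notes), the snippet below shows how such GRANT and REVOKE statements might be issued programmatically. It assumes a reachable PostgreSQL server and the psycopg2 driver; the connection string, the employee and department relations, and the account alice are placeholders, not names from the text.

```python
# Illustrative sketch: granting and revoking relation-level privileges (placeholder names).
import psycopg2

conn = psycopg2.connect("dbname=company user=dba password=secret host=localhost")
cur = conn.cursor()

# Retrieval and modification privileges on relation 'employee' for account 'alice'.
cur.execute("GRANT SELECT, INSERT, DELETE ON employee TO alice")

# UPDATE privilege restricted to specific attributes of the relation.
cur.execute("GRANT UPDATE (salary, address) ON employee TO alice")

# REFERENCES privilege, restricted to one attribute, for use in integrity constraints.
cur.execute("GRANT REFERENCES (dnumber) ON department TO alice")
conn.commit()

# Later, once the task is completed, the owner cancels the temporary privilege.
cur.execute("REVOKE SELECT ON employee FROM alice")
conn.commit()

cur.close()
conn.close()
```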
Differences between database systems and IR systems.

Database systems:
1. Structured data.
2. Schema driven.
3. The relational (or object, hierarchical, and network) model is predominant.
4. Structured query model.
5. Rich metadata operations.
6. A query returns data.
7. Results are based on exact matching (always correct).

IR systems:
1. Unstructured data.
2. No fixed schema; various data models (e.g., the vector space model).
3. Free-form query models.
4. Rich data operations.
5. A search request returns a list of, or pointers to, documents.
6. Results are based on approximate matching and measures of effectiveness (may be imprecise and ranked).
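To make the last contrast concrete, here is a minimal Python sketch of approximate matching in the vector space model (an illustration added to these notes, not taken from them; the documents and query are made-up examples): documents and a query are represented as term-frequency vectors and ranked by cosine similarity, so results are ranked and may be imprecise rather than exact.

```python
# Minimal vector-space-model sketch: rank documents by cosine similarity to a query.
import math
from collections import Counter

docs = {
    "d1": "big data systems store and analyze large volumes of data",
    "d2": "relational databases answer structured queries with exact matching",
    "d3": "information retrieval ranks documents by approximate matching",
}
query = "approximate matching of documents"

def tf_vector(text):
    """Term-frequency vector of a text (very crude whitespace tokenization)."""
    return Counter(text.lower().split())

def cosine(v1, v2):
    dot = sum(v1[t] * v2[t] for t in v1)
    norm = math.sqrt(sum(c * c for c in v1.values())) * math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

q_vec = tf_vector(query)
ranked = sorted(docs, key=lambda d: cosine(tf_vector(docs[d]), q_vec), reverse=True)

for doc_id in ranked:
    print(f"{doc_id}: similarity = {cosine(tf_vector(docs[doc_id]), q_vec):.3f}")
```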
