
TAKE NOTE (Insights into the SAP solution and technology)

Can in-memory computing answer BIG DATA?

Like it or not, the amount of information a company deals with only ever increases. This has given rise to the concept of “Big Data”: companies increasingly rely on large volumes of information from a variety of sources to analyze, improve and execute their operations.

There are several reasons for this. First, simple availability: the growing use of technology in business generates ever more data. Second, regulation: companies must retain increasing amounts of information to prove compliance. Finally, there is a growing recognition that companies must use every resource at their disposal. As a result, data that once might have seemed irrelevant is now pored over for any perceived value.

Uses for Big Data

The big question for Big Data is what to do with it. Most companies will naturally want to carry out in-depth analysis of the data within their ERP systems, digging deep to analyze and predict the most effective way to do business and determine future strategies and tactics. However, as data volumes increase, some companies hit a stumbling block: how do they process such a huge amount of information in a timely manner?

One option is to study only a proportion of the whole mass, yet this can easily produce inaccurate results, as companies end up basing their decisions on an incomplete view. With enough computing power, this obstacle is removed: companies gain the performance to analyze the entire mass of data at high speed, all at once.
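To see why a partial scan can mislead, consider a toy illustration in Python (the order data is invented for this example), where the high-value records happen to sit at the end of the file:

```python
# Toy data set: 10,000 orders, where the 1,000 large enterprise deals
# happen to be clustered at the end of the file.
orders = [100.0] * 9_000 + [5_000.0] * 1_000

full_avg = sum(orders) / len(orders)        # 590.0 -- the true average
partial_avg = sum(orders[:2_000]) / 2_000   # 100.0 -- the slice misses every large deal

print(f"full scan: {full_avg:.0f}, partial scan: {partial_avg:.0f}")
```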

In-memory computing tools, such as SAP’s HANA in-memory appliance, are designed to deliver this power, so that companies can analyze vast quantities of business data from a variety of sources as and when it is received and needed.

The concept behind in-memory computing is relatively simple. Traditionally, data is placed in persistent storage and, when needed, is fetched into the computer’s memory to be acted upon. This creates a natural bottleneck that reduces speed: even with the fastest solid-state drives, there is still a gap while data is accessed, transferred to memory, and then returned so the next batch of data can be used. As data volumes grow, so does the time needed simply for access, let alone for actual analysis.
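As a rough sketch of that gap, the following Python fragment (purely illustrative, not a benchmark of any real database engine) reads the same payload repeatedly from disk and then from a copy already held in memory:

```python
import os
import time

# 50 MB of sample "data" held in memory, and a copy written to disk.
PAYLOAD = os.urandom(50 * 1024 * 1024)
with open("sample.bin", "wb") as f:
    f.write(PAYLOAD)

def checksum(data: bytes) -> int:
    return sum(data[::4096])  # touch every page of the data

# Disk-resident: every pass pays the access-and-transfer cost.
start = time.perf_counter()
for _ in range(5):
    with open("sample.bin", "rb") as f:
        checksum(f.read())
disk_time = time.perf_counter() - start

# Memory-resident: the data is already where the CPU can reach it.
start = time.perf_counter()
for _ in range(5):
    checksum(PAYLOAD)
mem_time = time.perf_counter() - start

print(f"from disk:   {disk_time:.3f}s")
print(f"from memory: {mem_time:.3f}s")
os.remove("sample.bin")
```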

In-memory computing takes advantage of a better understanding of how data is shaped and stored, together with the constantly falling price of fast solid-state memory, to do away with the traditional role of persistent storage. Instead, data is stored directly in the computer’s memory. As a result, when it needs to be analyzed it is already available and can be accessed near-instantaneously.
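Part of that better understanding of how data is shaped is layout. SAP HANA, for instance, primarily stores tables column-wise, so a scan over one attribute touches only that attribute’s values. Here is a minimal Python sketch of the idea (the table and figures are invented for illustration):

```python
from array import array

N = 100_000

# Row layout: each record is a dict; an aggregate over one attribute
# still has to walk every whole record.
rows = [{"id": i, "region": "EMEA", "revenue": i * 1.5} for i in range(N)]
total_rowwise = sum(r["revenue"] for r in rows)

# Column layout: each attribute is a contiguous array, so the same
# aggregate scans one tightly packed column and nothing else.
revenue_column = array("d", (i * 1.5 for i in range(N)))
total_columnar = sum(revenue_column)

assert total_rowwise == total_columnar  # same values, same summation order
```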

The most evident benefit of in-memory processing is its speed. Without the bottleneck of having to access data in storage, companies can swiftly analyze information and use it to create the best possible strategies.

This speed is vital. Rather than analyzing information that is days or weeks out of date, companies can perform complex queries in minutes, so business operations can be investigated and improved based on the situation as it is rather than as it was last week. At the same time, in-memory computing’s power means companies can scan entire sets of data rather than representative samples, and so be sure they are acting on all of the facts.

This power and speed bring other benefits. Rather than streamlining analysis by forcing data into a rigid format that answers only certain predefined queries, companies can instead store data in a less structured form. By relying on the power of in-memory computing to compensate for this lack of structure, companies also gain far more flexibility in how they access the information.

For example, if a company using in-memory tools suddenly decides to study its HR processes in the light of new customer feedback data, it does not need to restructure the data on file to accommodate a planned set of new queries. It simply asks the questions as and when they arise. These benefits are why in-memory is already used by many organizations, for purposes ranging from maximizing sales to analyzing gene sequences to helping government entities identify waste and abuse.
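As a concrete sketch of that ad-hoc style, the fragment below uses Python’s built-in sqlite3 module with an in-memory database as a stand-in for an engine such as HANA; the feedback table and the query are invented purely for illustration:

```python
import sqlite3

# An in-memory SQLite database stands in for an in-memory engine here;
# the table and data are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE feedback (
        customer_id INTEGER,
        department  TEXT,
        score       INTEGER,
        comment     TEXT
    )
""")
con.executemany(
    "INSERT INTO feedback VALUES (?, ?, ?, ?)",
    [
        (1, "HR", 2, "slow response to onboarding query"),
        (2, "HR", 4, "helpful and quick"),
        (3, "Sales", 5, "great follow-up"),
    ],
)

# No schema redesign, no pre-built aggregate: the question is simply
# asked when it occurs to someone.
for dept, avg_score in con.execute(
    "SELECT department, AVG(score) FROM feedback GROUP BY department"
):
    print(dept, round(avg_score, 2))
```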

Taking the plunge

To an extent, the decision to adopt in-memory computing is less one of “whether” and more one of “when”. If a company is large enough and collects enough information, the inevitability of Big Data means that it will have to adopt in-memory computing at some point so it can continue to function.

For certain sectors where huge amounts of data are practically a requirement, such as utilities or finance, in-memory computing is already a hugely disruptive technology. Companies in these sectors would do well to make the move to in-memory computing early, rather than being left trying to catch up with the competition. For others, the choice is less clear-cut. A company with relatively little data may feel the costs of an in-memory implementation far outweigh the benefits.

What is clear is that the move to in-memory computing, while it might be inevitable, will not necessarily be straightforward. Companies should take advantage of all sources of information at their disposal, from suppliers to user groups, to help them make their decision. Whether the best decision is to implement in-memory now, in the future, in-house, via the cloud, or simply not at all, companies need to be sure they have made a well-informed choice. This choice also needs to cover the most important factor of all – as powerful as in-memory computing is, like all technology it is worse than useless if it is not used to the correct end.

UNDER DEVELOPMENT (Information for ABAP Developers)

An introduction to Web Dynpro – Part 1

The Origins of Web Dynpro

Web Dynpro, SAP’s newest user interface (UI) development option for the SAP NetWeaver platform, has been designed to become the de facto option of choice for SAP development. Web Dynpro was created because, like every other software vendor in the Web space, SAP needed a long-term, strategic solution to the many problems faced by Web developers in implementing browser-based business applications.

The ideas behind Web Dynpro were circulating within SAP as early as the beginning of 2001. During the brainstorming phase, SAP had to consider a wide range of design criteria and combine them into a single toolset capable of meeting all its needs.

READ MORE

Q&A (Your Questions answered)

Q. What is the difference between SE38 and SA38?

A. Very simply, the former (SE38) is used for development activities, including coding, compiling and executing program objects, while SA38 is mainly for executing programs. It is a way to let users run report objects without the ability to edit or change them, although there are better ways to address this requirement.

If you have a technical question you’d like answered, post the question to our Facebook page: www.facebook.com/itpsapinc
