SAP unveils its biggest thing for 20 YEARS: Core suite with HANA

SAP has rolled out a new version of its flagship business suite – and it's married to in-memory database Hana. The software giant announced SAP S/4HANA on Tuesday, with CEO Bill McDermott billing the fourth generation of its suite as the biggest launch in two decades. The core, on-premise Business Suite for running finance, …

  1. SecretSonOfHG

    A little late? And what about existing customers?

    SAP customers are among the most locked-in in the history of computing. Anything SAP does can cost a little more than expected and nobody will really complain. Because, well, there is no alternative to SAP.

    A missing part of the story is how customers are going to move from the current version to the new one, if at all. Because the thing as described is radically different from its predecessor at all levels. New hardware for HANA means the existing kit can't be repurposed; a new database model means the existing data in relational databases has to somehow move to a data model with far fewer places to store it all. All custom coding also has to be reworked. Not to mention training, certification and such.

    I can feel a disturbance in the force, as if thousands of consultants were starting to salivate over the prospect of a migration. I predict another disturbance in the force when existing SAP customers realize that they can't move to the new version without spending about the same as what they had to pay in the first place.

  2. Anonymous Coward

    SAP is fucking shit at everything.

    1. Wedgie

      Putting it into perspective, it's not like the competition is any better when comparing like-for-like.

  3. James Anderson

    little big data

    I don't see how an in-memory database like HANA can qualify as big data when it's restricted to the RAM in a single machine.

    Gotta love an accounts package that's so inefficient it needs a custom in-memory DB just to implement standard business practice.

    1. big_D Silver badge

      Re: little big data

      Memory compression techniques make a big difference. I used to work on an OLAP system that, compressed, was about 2GB in size; I forgot the compress flag one day and the database ran out of disk space (100GB) before it had finished calculating the data.

      So, how many GB of compressed data do you need for "big data"? For larger customers, we are talking servers on the expensive side of 512GB RAM and clusters of servers... HANA can be expected to use between 256GB and 2TB of RAM for medium to large organisations.

      Your licence costs are going to be the least of your financial worries!
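      As an illustrative aside: ratios like the ones described here are easy to reproduce, because columnar business data is highly repetitive. The sketch below uses Python's built-in zlib on synthetic data; it is only an assumption-laden illustration of why column stores compress so well, not a model of HANA's actual compression.

```python
import zlib

# Synthetic "column" of business data: dates and status codes repeat
# heavily, which is exactly why column stores compress so well.
column = ("2015-02-04,OPEN;" * 50_000).encode()

compressed = zlib.compress(column, level=6)
ratio = len(column) / len(compressed)

print(f"raw: {len(column)} bytes, compressed: {len(compressed)} bytes")
print(f"ratio: roughly {ratio:.0f}:1")
```

      Real ERP tables are far less uniform than this toy column, so production ratios are much lower, but the direction of the effect is the same.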

      1. James Anderson

        Re: little big data

        Big Data, as used by marketeers of, say, Hadoop, usually refers to some minimum number of terabytes. You would not be considered a serious big data geek if your site had less than a petabyte.

        So "several GB" does not even begin to hack it.

        Next they will be claiming these power-guzzling (even when idle) machines have "green credentials".

    2. PhilipJ

      Re: little big data

      > "Gotta love an accounts package..."

      you sure have no idea what SAP actually is

    3. MadMike

      Oracle SPARC servers much larger and faster than HANA clusters

      HANA is a clustered RAM database, and clusters can never compete with a single SMP server; the latency as nodes communicate with each other in a cluster is too high. So the best performance will be within a single node. And the HANA nodes are tiny and have small amounts of RAM. The cluster can't grow too big either; there is a limit to how many nodes can be added.

      Contrast this HANA cluster (in total 10-20TB(?) of RAM, with bad latency) with the Oracle SPARC M7 server released this year: 64TB of RAM in a single SMP server. It is not a cluster, so it will be wicked fast with low latency. Apply RAM compression and you can run very, very large databases from memory. Much faster than HANA clusters. Also, consider that one SPARC M7 CPU can do 120GB/sec SQL queries, and you have 32 of these CPUs at your disposal. One x86 CPU does... 5GB/sec queries(?).

      The conclusion is clear: the fastest database servers are from Oracle. So if you need the fastest SAP analytics you need a 64TB single SMP database server.

  4. Fenton

    HANA is not restricted to the memory footprint of a single machine, and can scale out, by adding new blades.

    You can also relegate less frequently used data to near line storage if you want.

    There are different routes to HANA even with your existing ERP system.

    a) A side car where you move specific objects to HANA (your backend still being a traditional DB), where the SAP kernel reads from HANA and writes to both DBs.

    b) A normal DB migration (SAP has had those tools for years)

    c) Re-implementation of specific functions (e.g. Finance) on a new system (the S/4HANA platform)

    Once your "old" ERP system is on HANA, you can either implement new functionality on the new platform, or implement it on the same platform and move it later.

    Every new platform/technology is going to be disruptive at some point in time. But your traditional ERP system will still be supported until 2025 so plenty of time to migrate.
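    The "side car" in route (a) boils down to a dual-write, single-read pattern. The sketch below is hypothetical Python (the class and method names are invented purely for illustration; the real SAP kernel mechanism is far more involved):

```python
# Hypothetical sketch of the "side car" route: writes go to both the
# traditional backend and the in-memory copy, reads come from the copy.
class SideCar:
    def __init__(self, traditional_db, hana_db):
        self.traditional = traditional_db  # system of record (disk-based)
        self.hana = hana_db                # in-memory copy for fast reads

    def write(self, key, value):
        # keep both stores consistent on every write
        self.traditional[key] = value
        self.hana[key] = value

    def read(self, key):
        # reads are served from the in-memory side car
        return self.hana[key]


backend, memory_copy = {}, {}
db = SideCar(backend, memory_copy)
db.write("PO-1000", {"status": "open"})
print(db.read("PO-1000"))  # served from the in-memory copy
```

    The appeal of this route is that the traditional backend remains the system of record, so specific objects can be moved over one at a time.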

    1. SecretSonOfHG

      "Only" 10 years?

      I upvoted your post as I found it quite a short and informative summary of the options. Wish the SAP marketing and consultants were so clear.

      However, I think we need to be realistic about the migration costs. Most businesses are concerned about next quarter's or next year's performance: planning and budgeting further out than the next two years is not going to happen, much less execution.

      Well, the planning part may happen on paper, but only to be changed the following year. With that in mind, I expect to see HANA only in new SAP installations or as a desperate attempt to get out of some performance corner on existing installations.

  5. Fenton

    @SecretSonOfHG

    It's always hard to undertake such a project, but you have to look at your competition.

    Are they able to close a period in a matter of minutes? Are they able to do real-time replenishment/forecasting?

    Most HANA adoption at the moment (I work for an SAP hosting/consultancy) is around reporting, but companies are really thinking about adopting it for the back end. Memory prices will come down (although SAP does need to rethink its licence model).

    But all new SAP functionality and real time business process will be on HANA only.

    Also, a lot of customers want to move away from Oracle as the back end; at the moment the migration cost will be the same regardless of the target DB.

  6. Fenton

    @MadMike

    FUD

    SAP is about to release a version of HANA on IBM Power, so a P795 with 16TB and above.

    They also run on very large single-node x86 systems (e.g. SGI with 64TB of RAM).

    It's also about the cost/performance benefit. For the performance a customer requires, two fully loaded blades may well be totally adequate, rather than going for a costly single node or a proprietary and costly Oracle solution.

    Being x86-based, you can easily move to cloud-based solutions, changing providers with ease (HANA also works under VMware, so it is very portable).

    Also you have the problem that the new SAP applications will be written for HANA only, with some SQL middleware sat in the middle parsing SQL statements to then read from memory.

    This is not about making an existing application run faster (with all their bad SQL statements, duplicated tables, etc); this is about designing applications from the ground up with new in-memory data models in mind.

    It is also about abstraction. Rather than loads of code with duplicate SQL statements, it is designed to be based on objects, e.g. purchase orders and finance documents, making development time quicker.

    HANA is not just a database, but contains analysis libraries, geospatial functionality, planning/graphing, search/text analysis, Landscape Transformation and Data Services, all built in.

    All objects/APIs that can be called without having to directly access a table.

    1. seven of five

      > SAP is about to release a version of HANA on IBM Power so P795 with 16TB and above

      Do you have a timeframe for this? Or other non-NDAed info? Our basis guys are running around beating that HANA drum all day and I would really prefer not to be responsible for some rackloads of Red Hat stuff doing the work of my 9117s. But it looks like they will start to make facts rather soon and then I'm in for the pound...

      pretty, pretty please?

      1. Fenton

        @seven_of_five

        Not seen the timeframe yet, but it will happen soon. It was demo'd at SAP TechEd.

    2. MadMike

      SGI is a cluster

      @Fenton

      Where is the FUD? Please point it out.

      First of all, the IBM p795, which has only 16TB of RAM, is tiny compared to the 64TB RAM SPARC M7 server. Also, one SPARC M7 CPU does 120GB/sec SQL queries. Couple that with 32 sockets and a compressed RAM database and you will realize it will crush the IBM p795 in analytics and other database work. The old POWER7 CPU, does it do 3-5GB/sec SQL queries? Or less?

      Besides, the IBM p795 is old and previous-generation. The new POWER8 server, the E880, only has 16 sockets and a maximum of 16TB RAM. Its performance is roughly equal to the old IBM p795 (32 sockets, 16TB RAM). So the E880 will be no match for the SPARC M7 server.

      Second, the SGI Altix and UV 2000 servers, with 10,000s of cores and 64TB RAM, are clusters. Sure, they run a single-image Linux kernel, but they are only used for clustered workloads. Look at the use cases: all SGI customers are running clustered HPC number crunching. No one uses those servers for SMP enterprise business workloads. No one runs SAP on them. But HANA might be suitable, because HANA is clustered.

      Besides, SGI themselves explicitly say in articles that their servers are only for clustered HPC workloads, and are not suitable for SMP business workloads. Do you want to read those links? Where are the SMP business benchmarks? They don't exist, because no one uses SGI for SMP workloads. Show us SAP benchmarks.

  7. optic

    HANA is good fun and a damn sight quicker than our standard Oracle setups. But when do I get my recursive CTE statements or CONNECT BY PRIOR equivalents? This is really hurting me. Or am I going to have to write a function... if so, why can this not be included as standard?

    Also, half the time their CE functions are slower than raw SQL for complex things. *shrugs* Still nice and quick usually.
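    For readers who haven't met them: a recursive CTE walks a hierarchy in standard SQL, which is what Oracle's CONNECT BY PRIOR does. A minimal sketch using Python's built-in sqlite3, which does support WITH RECURSIVE (whether HANA's dialect offers the same is exactly the complaint above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp(id INTEGER, boss INTEGER, name TEXT);
    INSERT INTO emp VALUES (1, NULL, 'ceo'), (2, 1, 'cfo'), (3, 2, 'clerk');
""")

# Walk the org chart from the root downwards, tracking depth --
# the recursive-CTE equivalent of CONNECT BY PRIOR id = boss.
rows = con.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM emp WHERE boss IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM emp e JOIN chain c ON e.boss = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()

print(rows)  # [('ceo', 0), ('cfo', 1), ('clerk', 2)]
```

    Without the construct, each level of the hierarchy needs its own self-join or a stored function, which is the pain being described.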

    1. MadMike

      Of course RAM-based HANA is "damn quicker" than Oracle's disk-based solutions. But have you tried Oracle's RAM databases? Oracle has the largest RAM servers on the market. Today they have 32TB RAM SPARC M6 servers. The competition has half of that, maximum.

  8. Fenton

    The FUD is the claim that HANA can only work with clustered memory systems. Not true: it will scale to the size of the largest box given to it.

    All well and good scaling up to 64TB, but do you know of any SAP customers who currently have ERP systems that are 640TB in size (assuming a 10:1 compression ratio)?

    Even the largest systems hover around the 20-30TB size, so will compress down nicely to 2-3TB.

    So will work with a

    The M series may be the McLaren of in-memory systems, but 99% of customers only need a fast bus, or maybe even a fleet of buses, and they will still come in cheaper than a proprietary closed architecture, with Oracle's lock-in and appalling customer service.

    Also, if you want to run the SAP analytics on your Oracle box, you need layers and layers of software to extract and transform the data into the Exalytics engine, adding more and more complexity.

    1. MadMike

      Nobody says that HANA only runs on clusters. What I AM saying, though, is that HANA clusters run on x86. And the largest x86 server is tiny; it has only 12TB RAM. Almost like the IBM p795, which only has 16TB RAM. Or the newest IBM POWER8 server, called the E880, which also has 16TB RAM. This means your largest data set in a single x86 node is only 12TB. When you use two or more nodes the performance degrades significantly. Sure, it will still be faster than hard disks, but it can't compare to a single 64TB SMP server, which Oracle releases this year (the SPARC M7 server will have 32 sockets, 64TB RAM, 8,192 threads, 1,024 cores, and each CPU will do 120GB/sec SQL queries).

      So I repeat what I have said all along: the fastest RAM database solution is from Oracle; it is not a HANA cluster. The SPARC M7 server will crush everything. Imagine running huge databases from RAM, lightning fast. Regarding the current SPARC M6, which has 32TB RAM: Merrill Lynch liked it, because they had queries which never finished, but on the M6 the queries finished in a couple of hours (it is the largest RAM server on the market, so it can hold more data in RAM than anyone else, boosting capabilities beyond everyone else). The SPARC M7 CPU is 4x faster than the M6 CPU, and does SQL queries much, much faster. And it holds twice the RAM. I'm telling you, it will crush.

      Now, imagine a HANA cluster consisting of several 64TB RAM servers from Oracle. That should give the highest possible performance. Ever.
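      Taking the figures claimed in this thread at face value (they are the poster's numbers, not verified benchmarks), the back-of-envelope full-scan time for such a box works out as follows:

```python
# Back-of-envelope arithmetic using the (unverified) figures claimed
# in the thread: 120GB/sec SQL scan per CPU, 32 sockets, 64TB of RAM.
ram_gb = 64 * 1024          # 64TB expressed in GB
scan_per_cpu_gb_s = 120     # claimed per-CPU SQL scan rate
sockets = 32

aggregate_gb_s = scan_per_cpu_gb_s * sockets   # 3840 GB/sec in total
full_scan_seconds = ram_gb / aggregate_gb_s

print(f"aggregate scan rate: {aggregate_gb_s} GB/sec")
print(f"full in-memory scan: {full_scan_seconds:.1f} seconds")
```

      On those claimed numbers a full scan of all 64TB takes around 17 seconds, which is the kind of figure driving the whole single-box-versus-cluster argument; in a cluster, the same scan also pays inter-node latency.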
