cluster

Results 1 - 25 of 167
By: Dell EMC     Published Date: Apr 18, 2017
In this report we compare the cost of an on-site hyperconverged solution with a comparable setup in the cloud. The on-site infrastructure is a Dell EMC VxRail™ hyperconverged appliance cluster; the cloud solution is Amazon Web Services (AWS).
Tags : cost, cloud, hyperconverged, hyperconverged infrastructure, aws, amazon web services, dell emc
     Dell EMC
By: Scalebase     Published Date: Mar 08, 2013
Technology analyst firm 451 Research offers a brief overview of ScaleBase’s Data Traffic Manager software for dramatic scaling of MySQL databases beyond the capabilities of MySQL 5.6.
Tags : shard, cluster, high availability, failover, mariadb, mysql, read/write, scalability, capacity planning, scalebase, 451 group overview, research, it management, data management, business technology, data center
     Scalebase
By: Dell EMC     Published Date: Oct 08, 2015
Download this white paper to learn how the company deployed a Hadoop cluster built on Dell and Intel® technologies to support a new big data insight solution that gives clients a unified view of customer data.
Tags : 
     Dell EMC
By: Symantec.cloud     Published Date: Sep 07, 2010
This white paper looks at the value of email availability and how it can be improved.
Tags : messagelabs hosted services, email continuity, disaster recovery, back up, clustering
     Symantec.cloud
By: Electric Cloud     Published Date: Nov 04, 2009
ElectricAccelerator improves the software development process by reducing build times, so development teams can reduce costs, shorten time-to-market, and improve quality and customer satisfaction.
Tags : electric cloud, electricaccelerator, software development, dependency management, software build accelerator, visualization, cluster manager, open source, roi
     Electric Cloud
By: Equinix     Published Date: Oct 27, 2014
Connections are great. Having a network to connect to is even better. Humans have been connecting, in one form or another, throughout history. Our cities were born from the drive to move closer to each other so that we might connect. And while the need to connect hasn’t changed, the way we do it definitely has. Nowhere is this evolution more apparent than in business. In today’s landscape, business is more virtual, geographically dispersed and mobile than ever, with companies building new data centers and clustering servers in separate locations.
Tags : data center, enterprise, cloud, experience, hybrid, performance, strategy, interconnectivity, network, drive, evolution, landscape, server, mobile, technology, globalization, stem, hyperdigitization, consumer, networking
     Equinix
By: Equinix     Published Date: Mar 26, 2015
Connections are great. Having a network to connect to is even better. Humans have been connecting, in one form or another, throughout history. Our cities were born from the drive to move closer to each other so that we might connect. And while the need to connect hasn’t changed, the way we do it definitely has. Nowhere is this evolution more apparent than in business. In today’s landscape, business is more virtual, geographically dispersed and mobile than ever, with companies building new data centers and clustering servers in separate locations. The challenge is that companies vary hugely in scale, scope and direction. Many are doing things not even imagined two decades ago, yet all of them rely on the ability to connect, manage and distribute large stores of data. The next wave of innovation relies on the ability to do this dynamically.
Tags : data center, interconnectivity, mobile, server clusters, innovation, data storage, storage
     Equinix
By: Oracle     Published Date: Apr 04, 2012
This white paper examines how the versatile design of the Oracle SPARC SuperCluster T4-4, along with its powerful bundled virtualization capabilities, makes it an ideal platform for consolidating enterprise servers and workloads and deploying applications.
Tags : supercluster, sparc, t4-4, workloads, oracle exalogic elastic cloud, oracle, elastic capacity, infrastructure, enterprise applications
     Oracle
By: IBM     Published Date: May 30, 2008
WinterCorp analyzes IBM's DB2 Warehouse and how it addresses twin challenges facing enterprises today: improving the value derived from the torrents of information processed every day while lowering costs at the same time. Discover why WinterCorp believes the advances in data clustering strategies and intelligent software compression algorithms in DB2 Warehouse improve the performance of business intelligence queries by radically reducing the I/Os needed to resolve them.
Tags : data warehousing, data management, database management, database administration, dba, business intelligence, ibm, leveraging information, li campaign, ibm li
     IBM
By: IBM     Published Date: Jul 05, 2016
This white paper discusses the concept of shared data scale-out clusters, as well as how they deliver continuous availability and why they are important for delivering scalable transaction processing support.
Tags : ibm, always on business, cloud, big data, oltp, ibm db2 purescale, networking, knowledge management, enterprise applications, data management, business technology, data center
     IBM
By: IBM     Published Date: Oct 13, 2016
Compare IBM DB2 pureScale with any other offering being considered for implementing a clustered, scalable database configuration, and see how it delivers continuous availability and why that matters. Download now!
Tags : data. queries, database operations, transactional databases, clustering, it management, storage, business technology
     IBM
By: AWS     Published Date: Sep 05, 2018
Amazon Redshift Spectrum—a single service that can be used in conjunction with other Amazon services and products, as well as external tools—is revolutionizing the way data is stored and queried, allowing for more complex analyses and better decision making. Spectrum allows users to query very large datasets on S3 without having to load them into Amazon Redshift. This helps address the Scalability Dilemma—with Spectrum, data storage can keep growing on S3 and still be processed. By utilizing its own compute power and memory, Spectrum handles the hard work that would normally be done by Amazon Redshift. With this service, users can now scale to accommodate larger amounts of data than the cluster would have been capable of processing with its own resources.
Tags : 
     AWS
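The setup the abstract describes can be sketched with a pair of SQL statements: an external schema that points Redshift at a data catalog, and an external table over files in S3 that Spectrum can scan without loading into the cluster. The statements are composed here as Python strings so the shape is easy to see; the schema name, table, columns, bucket path, and IAM role are hypothetical placeholders, not details from the paper.

```python
# Sketch of a Redshift Spectrum setup. All identifiers below are invented
# placeholders; substitute your own catalog database, bucket, and IAM role.

create_schema = """
CREATE EXTERNAL SCHEMA spectrum
FROM DATA CATALOG DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

create_table = """
CREATE EXTERNAL TABLE spectrum.clickstream (
    user_id BIGINT,
    url     VARCHAR(2048),
    ts      TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3://my-bucket/clickstream/';
"""

# Queries can then touch the S3-resident data directly; Spectrum scans the
# files with its own fleet rather than the cluster's own compute nodes.
query = "SELECT COUNT(*) FROM spectrum.clickstream WHERE ts >= '2018-01-01';"
```

Because the table's data stays on S3, it can keep growing past what the cluster alone could hold, which is the "Scalability Dilemma" point the abstract makes.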
By: Cloudant, an IBM Company     Published Date: May 15, 2014
Learn how a Cloudant account can be hosted within a multi-tenant Cloudant cluster, or on a single-tenant cluster running on dedicated hardware hosted within a top-tier cloud provider like Rackspace or IBM SoftLayer.
Tags : cloudant, cloud, cloud computing, data layer, dbms, dsaas, saas, data replication, data delivery
     Cloudant, an IBM Company
By: WANdisco     Published Date: Oct 15, 2014
In this Gigaom Research webinar, the panel will discuss how the multi-cluster approach can be implemented in real systems, and whether and how it can be made to work. The panel will also talk about best practices for implementing the approach in organizations.
Tags : wandisco, wan, wide area network, hadoop, clusters, clustering, load balancing, data, big data, data storage, storage
     WANdisco
By: Avi Networks     Published Date: Mar 06, 2019
OpenShift-Kubernetes offers an excellent automated application deployment framework for container-based workloads. Services such as traffic management (load balancing within a cluster and across clusters/regions), service discovery, monitoring/analytics, and security are critical components of an application deployment framework. Enterprises require a scalable, battle-tested, and robust services fabric to deploy business-critical workloads in production environments. This white paper provides an overview of the requirements for such application services and explains how Avi Networks provides a proven services fabric for deploying container-based workloads in production environments using OpenShift-Kubernetes clusters.
Tags : 
     Avi Networks
By: BlackBerry Cylance     Published Date: Jul 02, 2018
The information security world is rich with information. From reviewing logs to analyzing malware, information is everywhere and in vast quantities, more than the workforce can cover. Artificial intelligence (AI) is a field of study that is adept at applying intelligence to vast amounts of data and deriving meaningful results. In this book, we will cover machine learning techniques in practical situations to improve your ability to thrive in a data-driven world. With clustering, we will explore grouping items and identifying anomalies. With classification, we’ll cover how to train a model to distinguish between classes of inputs. In probability, we’ll answer the question “What are the odds?” and make use of the results. With deep learning, we’ll dive into the powerful biology-inspired realms of AI that power some of the most effective methods in machine learning today. Learn more about AI in this eBook.
Tags : artificial, intelligence, enterprise
     BlackBerry Cylance
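To make the clustering idea concrete, here is a minimal k-means sketch in plain Python that groups points and flags an outlier. It is a toy, not the eBook's method: the data points, the naive first-k centroid initialization, and the distance threshold of 10 are all invented for the example.

```python
# Minimal k-means clustering plus a simple distance-based anomaly check.
import math

def kmeans(points, k, iters=20):
    # Seed centroids with the first k points (naive but deterministic).
    centroids = points[:k]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(axis) / len(cl) for axis in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

points = [(1, 1), (1.5, 2), (1, 1.5),   # tight group A
          (8, 8), (8.5, 8), (8, 9),     # tight group B
          (25, 25)]                     # lone outlier
centroids, clusters = kmeans(points, k=2)

# Anything far from every centroid is treated as an anomaly.
anomalies = [p for p in points
             if min(math.dist(p, c) for c in centroids) > 10]
```

Running this leaves `anomalies` containing only the lone `(25, 25)` point, the "identifying anomalies" half of the clustering story.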
By: VMTurbo     Published Date: Mar 25, 2015
An Intelligent Roadmap for Capacity Planning

Many organizations apply overly simplistic principles to determine requirements for compute capacity in their virtualized data centers. These principles are based on a resource allocation model which takes the total amount of memory and CPU allocated to all virtual machines in a compute cluster, and assumes a defined level of overprovisioning (e.g. 2:1, 4:1, 8:1, 12:1) in order to calculate the requirement for physical resources. Often managed in spreadsheets or simple databases, and augmented by simple alert-based monitoring tools, the resource allocation model does not account for actual resource consumption driven by each application workload running in the operational environment, and inherently corrodes the level of efficiency that can be driven from the underlying infrastructure.
Tags : capacity planning, vmturbo, resource allocation model, cpu, cloud era, it management, knowledge management, enterprise applications
     VMTurbo
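The allocation model the paper critiques is easy to state in code. The sketch below contrasts it with sizing from observed consumption; every number (the VM specs, utilizations, and the 4:1 ratio) is an invented placeholder. Note how the two models disagree in both directions for this workload mix: allocation-based sizing overestimates CPU and underestimates memory.

```python
# Contrast of two capacity-planning approaches: the simplistic
# allocation/ratio model vs. sizing from observed peak consumption.

def allocation_model(vms, ratio):
    """Physical requirement = total allocated resources / overprovisioning ratio."""
    total_vcpu = sum(vm["vcpu"] for vm in vms)
    total_mem = sum(vm["mem_gb"] for vm in vms)
    return {"cpu_cores": total_vcpu / ratio, "mem_gb": total_mem / ratio}

def consumption_model(vms):
    """Physical requirement = what the workloads actually use at peak."""
    return {
        "cpu_cores": sum(vm["vcpu"] * vm["cpu_util"] for vm in vms),
        "mem_gb": sum(vm["mem_gb"] * vm["mem_util"] for vm in vms),
    }

vms = [
    {"vcpu": 4, "mem_gb": 16, "cpu_util": 0.10, "mem_util": 0.50},
    {"vcpu": 8, "mem_gb": 32, "cpu_util": 0.05, "mem_util": 0.25},
]
alloc = allocation_model(vms, ratio=4)   # 3.0 cores, 12.0 GB
actual = consumption_model(vms)          # ~0.8 cores, 16.0 GB
```

The divergence is the paper's point: a fixed ratio managed in a spreadsheet cannot see that these VMs idle their CPUs but fill their memory.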
By: CA WA     Published Date: May 12, 2008
Emerging information technologies like composite and multi-tier applications, service oriented architectures (SOA), virtualization, grid and cluster computing, and Web Service-based application delivery create an extraordinary opportunity for IT managers. With these dynamic technologies, they can provide business owners with IT services that are extremely flexible and agile, driving a more dynamic and competitive business.
Tags : ca wa, workload automation, roi
     CA WA
By: IBM     Published Date: Feb 25, 2008
IBM HACMP supports a wide variety of configurations, and provides the cluster administrator with a great deal of flexibility. With this flexibility comes the responsibility to make wise choices. This paper discusses the choices that the cluster designer can make, and about the alternatives that make for the highest level of availability.
Tags : high availability, backup, recovery, utility computing, network management, ibm
     IBM
By: Adaptive Computing     Published Date: Feb 06, 2014
A new approach, known as “Big Workflow,” is being created by Adaptive Computing to address the needs of such applications. It is designed to unify public clouds, private clouds, MapReduce-type clusters, and technical computing clusters. Download now to learn more.
Tags : adaptive, adaptive computing, big data, data center, workload management, servers, cloud, cloud computing, storage, data storage, business technology
     Adaptive Computing
By: SAS     Published Date: Oct 18, 2017
Want to get even more value from your Hadoop implementation? Hadoop is an open-source software framework for running applications on large clusters of commodity hardware. As a result, it delivers fast processing and the ability to handle virtually limitless concurrent tasks and jobs, making it a remarkably low-cost complement to a traditional enterprise data infrastructure. This white paper presents the SAS portfolio of solutions that enable you to bring the full power of business analytics to Hadoop. These solutions span the entire analytic life cycle – from data management to data exploration, model development and deployment.
Tags : 
     SAS
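Hadoop's programming model can be illustrated with the classic word-count example. The sketch below runs the map, shuffle, and reduce phases in a single Python process purely to show the shape of the computation; a real Hadoop cluster distributes each phase across commodity nodes, which is where the scale comes from.

```python
# Toy word count in the MapReduce style: map emits (word, 1) pairs,
# shuffle groups pairs by key, reduce sums each group.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts["the"] == 3, counts["fox"] == 2
```

Because map and reduce touch each record independently, the framework can split the input across as many nodes as the data demands, the "virtually limitless concurrent tasks" the abstract refers to.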
By: CDW     Published Date: Apr 04, 2016
Cloud computing is increasingly being adopted as a way for IT organizations to decrease costs, improve efficiency, and enhance business agility. NetApp has been helping companies succeed in cloud deployment to achieve tangible results since long before the term cloud entered the popular lexicon. NetApp has the people, technologies, and partnerships in place to help organizations evolve existing IT infrastructure into an efficient cloud-based service delivery model. The NetApp® Unified Storage Architecture and clustered Data ONTAP® operating system integrate all storage capabilities into a single, easy-to-use platform. By choosing NetApp when making the transition to a private cloud or a hybrid cloud, organizations will be able to meet their storage business needs now and in the future.
Tags : cloud management, cloud services, cloud management, it infrastructure, cloud application, cloud computing, infrastructure, technology, storage management, data management
     CDW
By: Hewlett-Packard     Published Date: Aug 02, 2013
If your organization is deploying a new server farm or cluster for any reason — a newly virtualized application or a growing business initiative, perhaps — this is the time to consider blade servers as a cost-effective alternative to traditional rack servers. In most use cases, you will find blade servers to be less expensive than rack servers for both the initial purchase as well as for long-term total cost of ownership (TCO). In addition, blades enable improvements in manageability, agility, scalability and power consumption.
Tags : blades, it costs, cost of ownership
     Hewlett-Packard
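A TCO comparison of the kind the paper recommends can be roughed out in a few lines. Every figure below (unit prices, wattage, electricity rate, per-unit admin cost) is an invented placeholder for illustration, not data from the white paper.

```python
# Back-of-the-envelope server TCO: purchase price plus multi-year
# power and administration costs. All inputs are illustrative.

def tco(unit_price, units, watts_per_unit, years=3,
        kwh_rate=0.10, admin_per_unit_yr=500.0):
    capex = unit_price * units
    # Energy over the period, converting watt-hours to kWh.
    energy_kwh = watts_per_unit * units * 24 * 365 * years / 1000.0
    opex = energy_kwh * kwh_rate + admin_per_unit_yr * units * years
    return capex + opex

rack_tco = tco(unit_price=6000, units=16, watts_per_unit=450)
blade_tco = tco(unit_price=5000, units=16, watts_per_unit=300,
                admin_per_unit_yr=350.0)  # shared chassis eases management
```

With these assumed inputs the blade deployment comes out cheaper over three years, driven by lower power draw and management overhead; plugging in your own quotes and rates is the exercise the paper suggests.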