Databases on Cloud

Cloud storage provides whatever amount of storage you require, on demand. It is persistent and can be accessed in a variety of ways, both within the data center where the cloud is hosted and over the Internet. When obtained from an external provider, it is purchased on a pay-as-you-go basis: you do not manage the storage, you simply use it, and the service provider manages it for you.

Cloud systems should be geographically dispersed to reduce their vulnerability to earthquakes and other catastrophes, which raises significant technical challenges for distributed data interoperability and mobility. Data interoperability will become even more essential in the future as one component of a multi-faceted approach to many applications; many open challenges still remain, such as cloud data security and the efficiency of query processing in the cloud.

Cloudy

Despite the potential cost advantages, cloud-based implementations of the functionality found in traditional databases face significant new challenges, and traditional database architectures appear poorly equipped to operate in a cloud environment. For example, a modern database system generally assumes that it has control over all hardware resources (so as to optimize queries) and over all requests to data (so as to guarantee consistency). Unfortunately, this assumption limits scalability and flexibility, and it does not fit the cloud model, in which hardware resources are allocated to applications dynamically based on current requirements. Furthermore, cloud computing mandates a loose coupling between functionality (such as data management) and machines. Cloudy is a vehicle for exploring design issues such as relaxed consistency models and the cost efficiency of running transactions in the cloud. One key idea is to employ a reservation pattern in which updates are reserved before they are actually committed – in some sense, a generalization of two-phase commit in which the ability to commit is reserved before the actual commit itself.
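The reservation idea can be sketched as a two-step protocol: a writer first reserves the right to commit a key (analogous to the prepare phase of two-phase commit), and only later commits or aborts. The following is a minimal single-process sketch; the class and method names are illustrative assumptions, not Cloudy's actual API.

```python
# Sketch of the reservation pattern: an update is reserved before it is
# committed, so conflicting writers are rejected at reservation time.
# All names here are hypothetical, for illustration only.
class ReservationStore:
    def __init__(self):
        self.committed = {}      # key -> committed value
        self.reservations = {}   # key -> pending (reserved) value

    def reserve(self, key, value):
        """Reserve an update; fails if another update already holds the key."""
        if key in self.reservations:
            return False
        self.reservations[key] = value
        return True

    def commit(self, key):
        """Apply a previously reserved update and release the reservation."""
        self.committed[key] = self.reservations.pop(key)

    def abort(self, key):
        """Release a reservation without applying it."""
        self.reservations.pop(key, None)


store = ReservationStore()
assert store.reserve("balance", 100)      # first writer gets the reservation
assert not store.reserve("balance", 200)  # a concurrent writer is refused
store.commit("balance")
print(store.committed["balance"])  # 100
```

A real system would, of course, replicate the reservation across nodes; the point of the sketch is only the separation between reserving the ability to commit and the commit itself.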

Generic structures allow the creation of an arbitrary number of tables with arbitrary shapes. A Universal Table is a generic structure with a Tenant column, a Table column, and a large number of generic data columns. The data columns have a flexible type, such as VARCHAR, into which other types can be converted. The nth column of each logical source table for each tenant is mapped into the nth data column of the Universal Table. As a result, different tenants can extend the same table in different ways. By keeping all of the values for a row together, this approach obviates the need to reconstruct the logical source tables. However, it has the obvious disadvantage that the rows need to be very wide, even for narrow source tables, and the database has to handle many null values. While commercial relational databases handle nulls fairly efficiently, they nevertheless use some additional memory. Perhaps more significantly, fine-grained support for indexing is not possible: either all tenants get an index on a column or none of them do. As a result of these issues, additional structures must be added to this approach to make it feasible.
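The Universal Table mapping described above can be illustrated with a small example using SQLite from Python. The schema and column names (universal_tab, col0..col4) are hypothetical; the point is that two tenants share one physical table while their logical "account" tables have different shapes.

```python
import sqlite3

# One physical Universal Table: tenant id, logical table name, and a fixed
# pool of generic VARCHAR-like data columns (a real system would have many).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE universal_tab (
        tenant INTEGER,
        tbl    TEXT,
        col0 TEXT, col1 TEXT, col2 TEXT, col3 TEXT, col4 TEXT
    )
""")

# Tenant 17's logical "account" table has columns (aid, name); tenant 35
# has extended the same logical table with an extra "hospital" column.
conn.execute("INSERT INTO universal_tab VALUES (17, 'account', '1', 'Acme', NULL, NULL, NULL)")
conn.execute("INSERT INTO universal_tab VALUES (35, 'account', '1', 'Ball', 'St. Mary', NULL, NULL)")

# Reconstructing a tenant's logical source table is a simple projection,
# renaming the generic columns back to their logical names.
rows = conn.execute(
    "SELECT col0 AS aid, col1 AS name, col2 AS hospital "
    "FROM universal_tab WHERE tenant = 35 AND tbl = 'account'"
).fetchall()
print(rows)  # [('1', 'Ball', 'St. Mary')]
```

Note that everything is stored as TEXT in the generic columns, which is exactly the type-flexibility trade-off the paragraph describes, and that any index on col2 would apply to every tenant's use of that column at once.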

Cloud-based applications need high scalability and availability at a low and controlled cost. In the 1960s, most database applications were used to track cash flow, i.e., simple debit and credit transactions, and it was easy for an organization to spend a large portion of its IT budget on database software and administration. Since then, applications have changed: data and databases have grown tremendously, yet databases alone solve only a relatively small fraction of the problem. Today, utility computing is no longer limited to a single database system for support and high performance; it must serve many interactive applications.
