Hibernate Named Query Introduction Tutorial

  • If we use HQL or native SQL queries in multiple places, the code becomes messy because the queries are scattered throughout the project.
  • Hibernate Named Query is a way to use queries by giving each query a name.
  • Hibernate Named Queries are defined in one place and can be used anywhere in the project.
  • Writing an HQL query in the HBM (mapping) file is called an HQL Named Query.
  • Alternatively, we can use the @NamedQuery annotation in the entity class.
  • So, for writing Hibernate Named Queries we use either the <query> element in the Hibernate mapping file or the @NamedQuery annotation in the entity.



Advantages of Named Query in Hibernate 

  • Global access
  • Easy to maintain.

Hibernate Named Queries by using Annotations:

  • If we want to create Named Queries using annotations in the entity class, we use the @NamedQueries and @NamedQuery annotations.
  • @NamedQuery is used to create a single named query.
  • @NamedQueries is used to group multiple queries; for every query inside @NamedQueries we use a @NamedQuery annotation.

Hibernate Named Query example by using Annotations:


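Below is a minimal sketch of how the named query used in the code that follows could be declared on the entity with annotations. The Docter entity and its fields are assumptions for illustration only.

  import javax.persistence.Entity;
  import javax.persistence.Id;
  import javax.persistence.NamedQueries;
  import javax.persistence.NamedQuery;

  // Assumed Docter entity for illustration; field names are not from the original example.
  @Entity
  @NamedQueries({
      @NamedQuery(name = "findDocterById",
                  query = "from Docter d where d.id = :id")
  })
  public class Docter {

      @Id
      private int id;

      private String name;

      // getters and setters omitted
  }

The named query can then be looked up by name and executed as shown below.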

  Query query = session.getNamedQuery("findDocterById");
  query.setInteger("id", 37);
  List empList = query.list();


Hibernate Named Queries by using Hibernate mapping file:

  • We configure Hibernate Named Queries as part of the Hibernate mapping file.
  • The named queries are written using the <query> element.

  <hibernate-mapping>
      <class ... >
          ---------
      </class>

      <query name="findDocterById">
          <![CDATA[from Docter s where s.id = :id]]>
      </query>
  </hibernate-mapping>


  Query query = session.getNamedQuery("findDocterById");
  query.setInteger("id", 64);
  List empList = query.list();

Hibernate Criteria Query Language (HCQL)

  • In Hibernate we can pull data from the database in two ways:
  • session.get() / session.load()
  • Hibernate Query Language (HQL)
  • Now we will discuss a third way, Hibernate Criteria Query Language, which addresses the limitations of the above two approaches.




Hibernate Criteria Query Language / HCQL

  • In order to fetch records based on some criteria we use Hibernate Criteria Query Language.
  • Using HCQL we can perform select operations on tables by applying conditions.
  • The Criteria API is the object-oriented alternative to HQL.
  • We can execute only SELECT statements using Criteria; we cannot execute UPDATE or DELETE statements with it.

Advantages of Hibernate Criteria Query Language (HCQL)

  • The Criteria API allows us to define a criteria query object by applying rules, filtration and logical conditions.
  • So we can add criteria to a query programmatically.
  • Criteria is also database independent, because it internally generates HQL queries.
  • Criteria is well suited for executing dynamic queries.
  • The Criteria API also includes Query by Example (QBE) functionality for supplying example objects (see the sketch after this list).
  • Criteria also includes projection and aggregation methods
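For instance, a minimal Query by Example sketch could look like this. The Product entity, its setter and the session variable are assumptions for illustration.

  // Query by Example: only the non-null properties of the example object are matched
  Product exampleProduct = new Product();
  exampleProduct.setProductName("zara shirt");

  Criteria criteria = session.createCriteria(Product.class)
                             .add(org.hibernate.criterion.Example.create(exampleProduct));
  List results = criteria.list();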
     

Criteria Interface:

  • Criteria Interface has some methods to specify criteria.
  • Hibernate session interface has a method named createCriteria() to create criteria.

  public interface Criteria extends CriteriaSpecification

  Criteria criteria = session.createCriteria(Student.class);
  List<Student> studentList = criteria.list();

Methods of Criteria interface:

  
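The commonly used methods of the Criteria interface are summarized in the sketch below; the Product entity is an assumption for illustration.

  Criteria criteria = session.createCriteria(Product.class); // createCriteria() of Session returns a Criteria
  criteria.add(Restrictions.gt("price", 1000));   // add(Criterion)       - adds a restriction (condition)
  criteria.addOrder(Order.asc("price"));          // addOrder(Order)      - adds ordering
  criteria.setProjection(Projections.rowCount()); // setProjection(...)   - aggregation / projection
  criteria.setFirstResult(0);                     // setFirstResult(int)  - pagination: first row
  criteria.setMaxResults(10);                     // setMaxResults(int)   - pagination: page size
  List results = criteria.list();                 // list()               - executes and returns the results
  // uniqueResult() returns a single object (or null) instead of a list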


Order Class

  public class Order extends Object implements Serializable
 
  • By using the Order class we can sort the records in ascending or descending order.
  • The Order class provides two static methods, one for ascending and one for descending order:

  public static Order asc(String propertyName)
  public static Order desc(String propertyName)

Hibernate Criteria query example using order class

  Criteria criteria = session.createCriteria(Product.class);

  // To sort records in descending order
  criteria.addOrder(Order.desc("price"));

  // To sort records in ascending order
  criteria.addOrder(Order.asc("price"));

Restrictions Class

  public class Restrictions extends Object

  • The Restrictions class provides methods that can be used to add restrictions (conditions) to a criteria object.
  • The Restrictions class has many methods; some of the commonly used ones are shown below.



Hibernate Criteria query example using Restrictions class


  Criteria criteria = session.createCriteria(Product.class);

  // To get records having price more than 3000
  criteria.add(Restrictions.gt("price", 3000));

  // To get records having price less than 2000
  criteria.add(Restrictions.lt("price", 2000));

  // To get records having productName starting with "zara"
  criteria.add(Restrictions.like("productName", "zara%"));

  // Case-insensitive form of the above restriction.
  criteria.add(Restrictions.ilike("productName", "zara%"));

  // To get records having price between 1000 and 2000
  criteria.add(Restrictions.between("price", 1000, 2000));

  // To check if the given property price is null
  criteria.add(Restrictions.isNull("price"));

  // To check if the given property is not null
  criteria.add(Restrictions.isNotNull("price"));

  // To check if the given (collection-valued) property is empty
  criteria.add(Restrictions.isEmpty("price"));

  // To check if the given (collection-valued) property is not empty
  criteria.add(Restrictions.isNotEmpty("price"));

  List results = criteria.list();

Pagination using Hibernate Criteria

  • By using the criteria methods setFirstResult() and setMaxResults() we can achieve pagination.


  Criteria criteria = session.createCriteria(Product.class);
  criteria.setMaxResults(10);
  criteria.setFirstResult(20);

Projections class in Hibernate

  public final class Projections extends Object

  • By using the org.hibernate.criterion.Projections class methods we can perform operations like minimum, maximum, average, sum and count.

Hibernate Criteria query example using Projection class

  Criteria criteria = session.createCriteria(Product.class);

  // To get total row count.
  criteria.setProjection(Projections.rowCount());

  // To get average price.
  criteria.setProjection(Projections.avg("price"));

  // To get distinct count of name
  criteria.setProjection(Projections.countDistinct("name"));

  // To get maximum price
  criteria.setProjection(Projections.max("price"));

  // To get minimum price
  criteria.setProjection(Projections.min("price"));

  // To get sum of price
  criteria.setProjection(Projections.sum("price"));

Hibernate Query Language (HQL)

Hibernate Query Language:

  • HQL is one of the features of Hibernate.
  • HQL is similar to SQL, but it uses the class name as the table name and the properties as the columns.
  • HQL is a database-independent query language.
  • HQL can be described as an object-oriented form of SQL.
  • HQL syntax is very similar to SQL syntax. HQL queries are formed using entities and their properties, whereas SQL queries are formed using tables and their columns (see the example below).
  • HQL queries are fully object oriented.
  • HQL queries are case sensitive with respect to entity and property names (keywords are not case sensitive).
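As an example, assuming a hypothetical Employee entity mapped to an EMPLOYEE table with a salary column (and an open session variable), the same query in SQL and HQL looks like this:

  // SQL - uses the table and column names:
  //   SELECT * FROM employee WHERE salary > 50000
  // HQL - uses the entity name and its property names:
  String hql = "from Employee e where e.salary > 50000";
  List<Employee> employees = session.createQuery(hql).list();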


Advantages of HQL:

  • Database Independent
  • HQL Queries supports inheritance and polymorphism
  • Easy to learn. 

Query Interface:

  • The Query interface represents an HQL query in the form of a query object.
  • The Query object is created by calling the createQuery(String hql) method of Session.

  Session hsession = sf.openSession();
  // "from Student" is an example HQL string; any valid HQL query can be passed here
  Query query = hsession.createQuery("from Student");

  • Send the query to Hibernate by calling the list() method on the query object.
  • Hibernate returns the results as a List (backed by an ArrayList); we can iterate over it and display the output to the client.

  List l = query.list();
  ArrayList emplist = (ArrayList) l;

  • Query is an interface available in the org.hibernate package. We cannot create an object of the Query interface directly; we create a reference variable which holds the implementation class object returned by Hibernate.

 Methods of Query Interface:

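The commonly used methods of the Query interface are shown in the sketch below; the Student entity and its marks property are assumptions for illustration.

  Query query = session.createQuery("from Student s where s.marks > :minMarks");
  query.setParameter("minMarks", 60); // binds a named parameter
  query.setFirstResult(0);            // pagination: first row
  query.setMaxResults(10);            // pagination: maximum number of rows
  List students = query.list();       // executes a SELECT and returns the results
  // query.uniqueResult()  - returns a single object (or null)
  // query.executeUpdate() - executes HQL UPDATE / DELETE statements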

Advantages and Disadvantages of Hibernate Compared to JDBC

Advantages of Hibernate over JDBC:

  1. Hibernate is an ORM tool
  2. Hibernate is an open source framework.
  3. Better than JDBC.
  4. Hibernate has an exception translator, which converts the checked exceptions of JDBC into the unchecked exceptions of Hibernate. So all exceptions in Hibernate are unchecked exceptions, and because of this there is no need to handle exceptions explicitly.
  5. Hibernate supports inheritance and polymorphism.
  6. With Hibernate we can manage data stored across multiple tables by applying relations (associations).
  7. Hibernate has its own query language called Hibernate Query Language. With this HQL hibernate became database independent.
  8. Hibernate supports relationships like One-To-One, One-To-Many, Many-To-One ,Many-To-Many.
  9. Hibernate has a caching mechanism. Using this, the number of database hits is reduced, so the performance of the application increases.
  10. Hibernate supports a lot of databases.
  11. See the Hibernate supported databases list below.
  12. Hibernate is a lightweight framework because it uses POJO classes for data transfer between the application and the database.
  13. Hibernate has versioning and timestamp features; with these we can know how many times the data was modified.
  14. Hibernate also supports annotations along with XML.
  15. Hibernate supports Lazy loading.
  16. Hibernate is easy to learn and it is developer friendly.
  17. The architecture is layered to keep you isolated from having to know the underlying APIs.
  18. Hibernate maintains database connection pool.
  19. Hibernate  has Concurrency support.
  20. Using Hibernate, applications are easy to maintain, and it increases productivity.


Disadvantages of Hibernate Compared to JDBC:
  • Hibernate is slower than JDBC because it generates many SQL queries at run time, but in my view this is not really a disadvantage.
  • Below are some of the disadvantages; these are generally not applicable to small applications, but we have given some possible scenarios.


Hibernate supported databases List

  • Hibernate is an open source framework and is also called an ORM tool.
  • Hibernate supports a lot of databases.
  • Below is the list of databases that are supported by Hibernate.


  1. DB2    
  2. DB2 AS/400   
  3. DB2 OS390 
  4. FrontBase
  5. Firebird  
  6. HypersonicSQL 
  7. H2 Database  
  8. Informix   
  9. Ingres  
  10. Interbase
  11. MySQL5    
  12. MySQL5 with InnoDB    
  13. MySQL with MyISAM    
  14. Mckoi SQL
  15. Microsoft SQL Server 2000    
  16. Microsoft SQL Server 2005  
  17. Microsoft SQL Server 2008  
  18. Oracle
  19. Oracle 9i   
  20. Oracle 10g    
  21. Oracle 11g      
  22. PostgreSQL  
  23. Progress   
  24. Pointbase
  25. SAP DB
  26. Sybase

  • For each database there is a corresponding Hibernate dialect class; a few of the important ones are shown in the sketch below.

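A minimal sketch of setting the dialect programmatically (the same hibernate.dialect property is normally set in hibernate.cfg.xml); the dialect class names listed are examples for a few of the databases above:

  org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
  cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5Dialect");
  // Oracle 10g : org.hibernate.dialect.Oracle10gDialect
  // PostgreSQL : org.hibernate.dialect.PostgreSQLDialect
  // SQL Server : org.hibernate.dialect.SQLServerDialect
  // DB2        : org.hibernate.dialect.DB2Dialect
  // H2         : org.hibernate.dialect.H2Dialect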

Big data Hadoop interview questions answers freshers and experienced - Part 2






31. Using the Linux command line, how will you copy a file from your local directory to HDFS?

  • hadoop fs -put localfile hdfsfile

32.What platforms and Java versions does Hadoop run on?

  •  Java 1.6.x or higher, preferably from Sun. Linux and Windows are the supported operating systems, but BSD, Mac OS/X, and OpenSolaris are known to work. (Windows requires the installation of Cygwin).



33.Is there an easy way to see the status and health of a cluster?

  • There are web-based interfaces to both the JobTracker (MapReduce master) and NameNode (HDFS master) which display status pages about the state of the entire system. 
  • By default, these are located at http://job.tracker.addr:50030/ and http://name.node.addr:50070/.
  • The JobTracker status page will display the state of all nodes, as well as the job queue and status about all currently running jobs and tasks.
  • The NameNode status page will display the state of all nodes and the amount of free space, and provides the ability to browse the DFS via the web.
  • You can also see some basic HDFS cluster health data by running:
  • $ bin/hadoop dfsadmin -report

34.Do I have to write my job in Java?

  • No. There are several ways to incorporate non-Java code, for example Hadoop Streaming (any executable as mapper/reducer) and Hadoop Pipes (C++).

35.How do I submit extra content (jars, static files, etc) for my job to use during runtime?

  • The distributed cache feature is used to distribute large read-only files that are needed by map/reduce jobs to the cluster. The framework will copy the necessary files from a URL (either hdfs: or http:) on to the slave node before any tasks for the job are executed on that node.
  • The files are only copied once per job and so should not be modified by the application.
  • Copying content into lib is not recommended and highly discouraged. Changes in that directory will require Hadoop services to be restarted.
36.How do I change final output file name with the desired name rather than in partitions like part-00000, part-00001?

  • You can subclass the OutputFormat.java class and write your own. You can look at the code of TextOutputFormat, MultipleOutputFormat.java, etc. for reference. It might be the case that you only need to make minor changes to one of the existing OutputFormat classes.
  • To do that, you can just subclass that class and override the methods you need to change.

37.How do you gracefully stop a running job?

  • hadoop job -kill <JOBID>

38.How the HDFS Blocks are replicated?

  • HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size.
  • The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time.
  • The NameNode makes all decisions regarding replication of blocks. HDFS uses a rack-aware replica placement policy. In the default configuration there are three copies of a data block on HDFS: two copies are stored on DataNodes on the same rack and the third copy on a different rack.

39.How the Client communicates with HDFS?

  • The client communicates with HDFS using the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file on HDFS. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
  • Client applications can talk directly to a DataNode once the NameNode has provided the location of the data.

40.What is HDFS Block size? How is it different from traditional file system block size?
  • In HDFS, data is split into blocks and distributed across multiple nodes in the cluster.
  • Each block is typically 64 MB or 128 MB in size, and each block is replicated multiple times.
  • The default is to replicate each block three times; replicas are stored on different nodes.
  • HDFS utilizes the local file system to store each HDFS block as a separate file.
  • The HDFS block size cannot be compared with the traditional file system block size (see the configuration sketch below).
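As a sketch, the block size is controlled by a configuration property (dfs.blocksize in Hadoop 2.x, dfs.block.size in older releases); the value below is only an example:

  org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
  conf.setLong("dfs.blocksize", 128L * 1024 * 1024); // 128 MB blocks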

41. When are the reducers started in a MapReduce job?
  • In a MapReduce job, reducers do not start executing the reduce method until all the map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The programmer-defined reduce method is called only after all the mappers have finished.

42. If reducers do not start before all mappers finish, why does the progress of a MapReduce job show something like Map(60%) Reduce(15%)? Why is the reducers' progress percentage displayed when the mappers have not finished yet?
  • Reducers start copying intermediate key-value pairs from the mappers as soon as they are available.
  • The progress calculation also takes into account the data transfer done by the reduce process, therefore the reduce progress starts showing up as soon as any intermediate key-value pair for a mapper is available to be transferred to a reducer.
  • Although the reducer progress is updated, the programmer-defined reduce method is called only after all the mappers have finished.

43.What is the Hadoop MapReduce API contract for a key and value Class?
  • The Key must implement the org.apache.hadoop.io.WritableComparable interface.
  • The value must implement the org.apache.hadoop.io.Writable interface (a minimal example of a custom key type follows).
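A minimal sketch of a custom key type that satisfies this contract; the class and field names are assumptions for illustration.

  import java.io.DataInput;
  import java.io.DataOutput;
  import java.io.IOException;
  import org.apache.hadoop.io.WritableComparable;

  public class EmployeeIdKey implements WritableComparable<EmployeeIdKey> {

      private int employeeId;

      @Override
      public void write(DataOutput out) throws IOException {
          out.writeInt(employeeId);                 // serialize the key
      }

      @Override
      public void readFields(DataInput in) throws IOException {
          employeeId = in.readInt();                // deserialize the key
      }

      @Override
      public int compareTo(EmployeeIdKey other) {
          return Integer.compare(employeeId, other.employeeId); // keys must be sortable
      }
  }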

44.What are combiners? When should I use a combiner in my MapReduce Job?
  • Combiners are used to increase the efficiency of a MapReduce program. 
  • They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers.
  • You can use your reducer code as a combiner if the operation performed is commutative and associative. 
  • The execution of the combiner is not guaranteed; Hadoop may or may not execute it, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution (see the driver sketch below).
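A sketch of wiring a combiner into a job driver (word-count style); WordCountDriver, TokenizerMapper and IntSumReducer are assumed example classes whose reduce operation is commutative and associative.

  Job job = Job.getInstance(new Configuration(), "word count");
  job.setJarByClass(WordCountDriver.class);    // assumed driver class
  job.setMapperClass(TokenizerMapper.class);
  job.setCombinerClass(IntSumReducer.class);   // combiner runs on local map output; execution is not guaranteed
  job.setReducerClass(IntSumReducer.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);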

45. Where is the Mapper output (intermediate key-value data) stored?
  • The mapper output (intermediate data) is stored on the local file system (NOT HDFS) of each individual mapper node.
  • This is typically a temporary directory location which can be set up in the configuration by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop job completes.

46. Name the most common InputFormats defined in Hadoop. Which one is the default?

  • The following are the most common InputFormats defined in Hadoop:
  1. TextInputFormat
  2. KeyValueInputFormat
  3. SequenceFileInputFormat
  • TextInputFormat is the Hadoop default.

47. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
  • TextInputFormat: reads lines of text files and provides the byte offset of the line as the key and the actual line as the value to the Mapper.
  • KeyValueInputFormat: reads a text file and parses each line into key, value pairs. Everything up to the first tab character is sent as the key to the Mapper and the remainder of the line is sent as the value.

48. What is InputSplit in Hadoop?

  • When a Hadoop job is run, it splits the input files into chunks and assigns each split to a mapper to process. This is called an InputSplit.

49. How is the splitting of files invoked in the Hadoop framework?
  • It is invoked by the Hadoop framework by running the getSplits() method of the InputFormat class (like FileInputFormat) defined by the user.

50. Consider case scenario: In M/R system,
  • HDFS block size is 64 MB
  • Input format is FileInputFormat
  • We have 3 files of size 64K, 65Mb and 127Mb
  • then how many input splits will be made by Hadoop framework?
  • Hadoop will make 5 splits as follows:
  • 1 split for the 64K file
  • 2 splits for the 65MB file
  • 2 splits for the 127MB file

51. What is the purpose of RecordReader in Hadoop?
  • The InputSplit has defined a slice of work, but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.

52. After the map phase finishes, the Hadoop framework does "Partitioning, Shuffle and Sort". Explain what happens in this phase.

  • Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine for all of its output (key, value) pairs which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same
  • Shuffle
  • After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.
  • Sort
  • Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer

53. If no custom partitioner is defined in Hadoop, how is data partitioned before it is sent to the reducer?
  • The default partitioner computes a hash value for the key and assigns the partition based on this result (see the sketch below).
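For reference, the default HashPartitioner effectively computes (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks. A custom partitioner is a sketch like the one below; the key/value types and the routing rule are assumptions for illustration.

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Partitioner;

  public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {

      @Override
      public int getPartition(Text key, IntWritable value, int numPartitions) {
          // route keys to a reducer based on their first character
          String k = key.toString();
          char first = k.isEmpty() ? ' ' : Character.toLowerCase(k.charAt(0));
          return (first & Integer.MAX_VALUE) % numPartitions;
      }
  }
  // registered in the driver with: job.setPartitionerClass(FirstLetterPartitioner.class);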

54. What is a Combiner
  • The Combiner is a "mini-reduce" process which operates only on data generated by a mapper.
  • The Combiner will receive as input all data emitted by the Mapper instances on a given node.
  • The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.

55. Give an example scenario where a combiner can be used and where it cannot be used
  • There can be several examples; the following are the most common ones:
  • Scenario where you can use combiner
  • Getting list of distinct words in a file
  • Scenario where you cannot use a combiner
  • Calculating mean of a list of numbers

56. What is the JobTracker?
  • The JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.

57. What are some typical functions of Job Tracker
  • The following are some typical tasks of Job Tracker
  • Accepts jobs from clients
  • It talks to the NameNode to determine the location of the data
  • It locates TaskTracker nodes with available slots at or near the data
  • It submits the work to the chosen Task Tracker nodes and monitors progress of each task by receiving heartbeat signals from Task tracker

58. What is the TaskTracker?
  • A TaskTracker is a node in the cluster that accepts tasks such as Map, Reduce and Shuffle operations from a JobTracker.

59. What is the relationship between Jobs and Tasks in Hadoop?
  • One job is broken down into one or many tasks in Hadoop.

60. Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?
  • It will restart the task on some other TaskTracker, and only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job.

61. Hadoop achieves parallelism by dividing the tasks across many nodes, so it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?
  • Speculative execution
62. How does speculative execution work in Hadoop?
  • The JobTracker makes different TaskTrackers process the same input.
  • When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon the tasks and discard their outputs.
  • The reducers then receive their inputs from whichever mapper completed successfully first.
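A sketch of turning speculative execution on or off per job; the property names are for Hadoop 2.x (older releases use mapred.map.tasks.speculative.execution), and the values shown are only examples.

  org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
  conf.setBoolean("mapreduce.map.speculative", true);
  conf.setBoolean("mapreduce.reduce.speculative", false);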
