SERVICE QUALITY IN HIGHER EDUCATION: EXPECTATIONS AND PERCEPTIONS OF STUDENTS

Ahmed Asim1 --- Naresh Kumar2+

1Lecturer, Avid College, Malé, Maldives

2Professor, Global Entrepreneurship Research and Innovation Centre, Universiti Malaysia Kelantan, Malaysia

ABSTRACT

Service quality, as judged by students, has become paramount for attracting and retaining students in the competitive higher education market of the Maldives. With increasing awareness, students demand wider choices. Postgraduate students were targeted to ascertain their expectations and perceptions of service quality at a selected higher education institution (HEI) in the Maldives. Quantitative data were collected using the SERVQUAL instrument in a cross-sectional survey design, yielding a sample of 72 respondents. The data were analyzed using SPSS version 23. Pearson correlation tests and multiple regression analysis revealed a positive but weak relationship between expectations and perceptions on all five dimensions of the SERVQUAL instrument. Interaction effects on expectations and perceptions between four pairs of groups were tested using MANOVAs. Significant effects were detected only between male and female students; for the other three pairs of groups, the null hypotheses were retained. Implications and suggestions for future research are discussed.

Keywords: Higher education institution, Expectations, Perceptions, Service quality, Students, Maldives.

ARTICLE HISTORY: Received: 13 July 2018; Revised: 1 August 2018; Accepted: 3 August 2018; Published: 7 August 2018.

Contribution/Originality: This study is one of the very few to investigate the expectations and perceptions of postgraduate students on the service quality of a higher education institution in the Maldives.

1. INTRODUCTION

Service industries have become the driving force of developing economies, and competition among these economies has made service quality a critical determinant of success. Especially in a service like higher education provision, positive word-of-mouth, the loyalty of graduates and courses that cater to the needs of prospective students play a crucial role in giving institutions a competitive advantage (Burns and Bush, 2006). Globalization and growing student awareness keep Maldivians informed about recent developments in higher education and raise the minimum quality of service they will accept. For this reason, successful institutions continuously measure the quality of the services they provide and work towards meeting student needs. Students are regarded as the most valuable asset of any successful higher education institution (HEI) (Nell and Cant, 2014). Such measures enable institutions to ascertain how to meet expectations and build student loyalty. Lovelock et al. (2009) explain that student satisfaction is highly correlated with staff satisfaction, setting off a chain of relationships between satisfaction, retention and productivity that leads to revenue growth and profitability. Sickler (2013) confirms, in a study of undergraduate perceptions of service quality, that satisfaction is a strong predictor of student retention. Creating educational value that inspires greater confidence in students, by offering value-laden benefits, providing specialized treatment and catering to their needs, will enhance loyalty and reduce attrition by minimizing defection and dropouts (Arokiasamy and Abdullah, 2012; Dehghan, 2013).

The trend of customer-orientation is only slowly starting to be recognized in the Maldivian higher education sector. Khodayari and Khodayari (2011) observe that the term 'customer' is unsuitable for describing the service exchange in a university from the student's point of view. Under the popular marketing axiom 'the customer is always right', the university's position becomes awkward, since it loses its authority over many aspects of quality that are inherent in maintaining academic integrity. It follows that if students are seen as 'customers', measuring service quality with the intention of improving it may not be appropriate. Those against this view hold that accepting the student as a 'customer' does not have to nullify the conventional formal relationship between academics and students. Considering these opinions, it would be shortsighted to bar the idea of the student as customer from higher education contexts, given that the student experience is not limited to contact between academics and students but involves a wider array of experiences, much like a micro-society.

Koni et al. (2013) argue that considerable pressure is mounting on institutions locally to measure the services that shape student behaviours and attitudes. In Australia, ATAR (Australian Tertiary Admission Rank) cut-off scores and in the UK, UCAS (Universities and Colleges Admissions Service) tariff scores are set by universities to indicate the academic ranking of the students they generally accept onto their programs. The universities demanding the highest scores in both countries are the oldest and largest state universities; high scores equate to high demand from students. The same is true for the Maldives' neighbouring countries, India and Sri Lanka. However, expectations and perceptions of service quality are not fixed. Agbor (2011) noted that students have different needs, which ultimately shape their satisfaction with services. Making strategic or tactical changes to services without understanding the mindset of students could ultimately be detrimental. The need to address increasing competition among higher education providers in the Maldives lends urgent strategic importance to students' expectations and perceptions of the services they receive. It is imperative to address the dissonance between students' needs and the strategic focus that translates into tactical and operational effort.

Despite increased effort internationally in academic research measuring people's attitudes in fields such as the social sciences and marketing, this effort is not mirrored in the Maldives, where published research remains scarce. Further, there are issues pertinent to measuring an elusive construct like service quality, which is difficult to define and measure. As mentioned previously, economic forces and the influence of international colleges and universities are having a notable impact on student demands for quality services (Nell and Cant, 2014). These demands have been compounded by changing societal values, employers' demands for graduates with better skills and abilities, and dwindling government funds to support state institutions. It is evident from the intensive promotional offers of private colleges that the external environment of higher education in the Maldives is characterized by declining enrolments, increasing pressures for staff accountability and declining trust in the quality of higher education (Aturupane et al., 2011; Aturupane and Shojo, 2012a). The ultimate consequence of all these characteristics of higher education services is one crucial result: the consumerization of the student in higher education. Arokiasamy and Abdullah (2012) reason that this business-like concept of the 'student as the customer' sheds light on the importance of satisfaction for higher education students.

Competition within HEIs in the Maldives is inherently different in character. In addition to external pressures, faculties, schools and centres compete internally, constantly vying for higher student intakes. These units can attract more students only by differentiating themselves positively and meeting student expectations; the student as consumer is at the helm of the strategic focus. Students apply to or join higher education institutions with presupposed expectations formed through word-of-mouth or other promotional media, and the experience of studying leads to perceptions that can catalyze further attitudes and decisions. It should be noted that high enrolments, in turn, bring potential for budgetary allocations, resource procurement and human asset development within those faculties, schools and centres. Hence, understanding what students expect of service quality is indispensable for the growth and development of an independent unit, faculty, school or centre. Governing bodies like the Maldives Qualifications Authority (MQA) and the Technical Vocational Education and Training Authority (TVETA) are expected to turn close attention to academic service quality as they gain stronger legislative empowerment in the near future, subjecting higher education providers to intense public scrutiny based on those bodies' independent reporting. In thriving HEIs, seeking client or student views is a proactive process (Kotler and Armstrong, 2008). It is more prudent to make the necessary changes through intelligent market-sensing in emerging markets and lead, rather than wait and be hampered by market pressures (Mbise and Tuninga, 2013).

The importance of student retention and reduced attrition cannot be overstressed in a competitive higher education market such as the Maldives at present. HEIs can succeed in attracting students, retaining them and capturing a good market share if the courses and services offered meet student expectations. When options abound to suit student needs, dissatisfied students are very likely to defect to other institutions that offer them the best possible opportunity. Kelso (2008) notes that HEIs have fallen short in efforts to focus on the quality issues that cause students to defect, and advocates that student perceptions be assessed continuously to cater to the exacting requirements of ever more demanding students. Research initiatives that fit the Maldivian higher education context and bring about service improvements are therefore timely. Thus, in this article we present the outcomes of a study with two main objectives: 1) to identify the relationship between students' expectations and perceptions of quality services in the specific dimensions of reliability, assurance, tangibility, empathy and responsiveness; and 2) to gauge the extent to which students' expectations and perceptions differ across demographic characteristics.

2. METHODOLOGY

Service quality is a reality that is understood and perceived by students depending on their experiences and current perceptions. This study employed a quantitative approach (a cross-sectional survey) to measure students' perceptions, benchmarked against each individual's understanding of the satisfaction the services bring. The target population was postgraduate students enrolled in the Master of Education program at a selected HEI in Malé, Maldives. To ensure confidentiality and research ethics, the name of the institution is kept anonymous in this report. Data were collected from 72 respondents who voluntarily participated, using the well-established SERVQUAL instrument. SERVQUAL is one of several instruments used in service quality analysis. A major reason for its popularity with researchers is its simplicity and its adaptability to different service sectors (Nyeck et al., 2002; Ibrahim et al., 2013). The instrument has two major sections, one measuring expectations of service quality and the other measuring perceptions of the service being evaluated (Atrek, 2012). Each section contains five distinct dimensions: tangibles, reliability, responsiveness, assurance, and empathy (Yeo and Li, 2014). Wang et al. (2015) comment that these dimensions represent the core evaluation criteria organizations use when measuring service quality. The questions use a 7-point Likert scale. To calculate the service quality under each dimension, the difference between perception and expectation is computed for each item and the items are then aggregated (Landrum et al., 2009). Yun (2001); Nell and Cant (2014) and Hirmukhe (2012) comment that SERVQUAL's validity and reliability have been supported by sufficient empirical research. Nell and Cant (2014) report that it is the most commonly used scale for measuring service quality. Cook and Thompson (2000) report a Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy of 0.94, an excellent value. The internal consistency of the tool was measured at 0.91 by Daneil and Berinyuy (2010), similar to the results obtained by the original authors, Parasuraman et al. (1988). The Cronbach's alpha values for all items under the expectations and perceptions dimensions in the present study are well above .90, highly consistent with the values reported in previous research. The analysis of the data was carried out using a range of parametric tests within descriptive and inferential statistics, and IBM SPSS Statistics version 23 was used to compute the relevant statistical values for testing the hypotheses.
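To make the scoring procedure concrete, the sketch below shows how per-dimension Gap Scores and Cronbach's alpha could be computed in Python. This is a minimal illustration, not the study's actual SPSS workflow: the response matrices are randomly generated stand-ins for the survey data, and the item groupings follow the dimension ranges listed in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 7-point Likert responses: 72 respondents x 22 items,
# one matrix for expectations and one for perceptions (simulated data).
expectations = rng.integers(1, 8, size=(72, 22)).astype(float)
perceptions = rng.integers(1, 8, size=(72, 22)).astype(float)

# Items per SERVQUAL dimension, following the groupings in Table 1.
dimensions = {
    "tangibility": slice(0, 4),
    "reliability": slice(4, 9),
    "responsiveness": slice(9, 13),
    "assurance": slice(13, 17),
    "empathy": slice(17, 22),
}

def gap_scores(exp, per, items):
    """Per-respondent Gap Score: perception minus expectation on each item,
    averaged over the dimension's items so the score stays on the 7-point scale."""
    return (per[:, items] - exp[:, items]).mean(axis=1)

def cronbach_alpha(item_matrix):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals)."""
    k = item_matrix.shape[1]
    item_vars = item_matrix.var(axis=0, ddof=1)
    total_var = item_matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

for name, items in dimensions.items():
    gs = gap_scores(expectations, perceptions, items)
    print(f"{name:15s} mean Gap Score = {gs.mean():+.2f}")

# With random data this alpha will be low; the study reports values above .90.
print("alpha (expectations):", round(cronbach_alpha(expectations), 2))
```

Averaging rather than summing items is a presentational choice here; either aggregation yields the same sign and ordering of dimension Gap Scores.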

3. FINDINGS

3.1. Profile of Respondents

The purpose of this study was to use the Gap Model and Expectancy Disconfirmation Theory (EDT) to investigate master's students' expectations and perceptions of service quality using the popular SERVQUAL instrument. Participants' viewpoints were also examined with respect to expectations and perceptions by age group, gender, year of enrollment (batch), mode of study and entry criteria. This section presents the discussions relating to the findings and interpretations derived from the statistical tests, in order to arrive at conclusions for two research questions and six hypotheses, as follows:

Question 1: What is the relationship between students' expectations and perceptions of quality services in specific dimensions of reliability, assurance, tangibility, empathy, and responsiveness?

H01: There is no statistically significant correlation between students' expectations and perceptions of the overall service quality.

H02: There is no statistically significant influence of any one or more dimensions that predict overall service quality.

Question 2: How do students’ expectations and perceptions differ among different demographic characteristics?

H03: There is no statistically significant difference in the expectations and perceptions of current students and graduates.

H04: There is no statistically significant difference in the expectations and perceptions of male and female students.

H05: There is no statistically significant difference in the expectations and perceptions of full-time and block-mode students.

H06: There is no statistically significant difference in the expectations and perceptions of students based on normal and alternative entry criteria.

The percentage of males in the survey is 49% (n=35) and of females 51% (n=37) of the 72 respondents. Almost half of the respondents are above 34 years old (n=33), about one third (n=22) fall into the 30-33 age group, and the 26-29 age group makes up about a quarter (24%, n=17) of respondents. Among the three batches of students who responded, almost half (51%, n=37) are from the 2015 batch. Only 17% (n=12) of participants responded from the 2013 batch, despite it being a large batch (n=39). Two entry criteria were set on the questionnaire, into which all respondents could be categorized: the normal and the alternative entry criteria for master's courses. The normal entry criteria require a bachelor's degree. Alternative entry allows students to enrol on the basis of experience and a diploma or advanced-diploma-level teaching qualification. About three quarters of participants (n=55, 76.4%) were allowed entry based on a degree, while the remaining quarter (n=17, 23.6%) entered based on a combination of diplomas and working experience.

3.2. General Findings for the Five Dimensions of SERVQUAL

In this section, the findings for each dimension are presented with key descriptive statistics. The tangibility dimension (see Table 1) comprised the first four items of the SERVQUAL instrument under both expectations and perceptions. This dimension measured student responses about the 'tangible' aspects of service provision, such as information technology facilities and equipment, the appearance of buildings and staff, and learning materials. Table 1 summarizes the descriptive statistics and the mean values obtained under both the expectation and perception sections. The measurement (N=72) for this dimension gives a higher value for expectations (M=5.71, SD=1.11) than for perceptions (M=4.25, SD=1.03), producing an overall Gap Score that is negative (M=-1.47, SD=1.28). Judging from the histogram and Q-Q plot, the overall Gap Score for tangibility appears approximately normal. A slight negative skewness (S=-0.424) is apparent, with the whole curve shifted slightly to the left, and kurtosis (K=-0.125) is negligible, being very close to zero.

The reliability dimension (see Table 1) comprised five items of the SERVQUAL instrument under both expectations and perceptions. This dimension measured student responses related to the trustworthiness of service staff and their ability to deliver the service dependably and accurately. The measurement relates not only to teaching staff but to every person who interacts with students. Table 1 presents the descriptive statistics and the mean values obtained under both the expectation and perception sections. The measurement (N=72) for this dimension gives a higher value for expectations (M=5.76, SD=1.28) than for perceptions (M=4.21, SD=1.06), and therefore an overall Gap Score that is negative (M=-1.55, SD=1.47). The histogram and Q-Q plot confirm that the overall Gap Score for reliability is close to normal; skewness is almost exactly zero (S=0.03), and kurtosis (K=-0.685) is moderate.

The responsiveness dimension (see Table 1) comprised four items of the SERVQUAL instrument under both expectations and perceptions. This dimension measured student responses related to the willingness and desire of staff to assist students and deliver prompt services, including the provision of accurate information. The measurement (N=72) for this dimension gives a higher value for expectations (M=5.89, SD=1.16) than for perceptions (M=4.39, SD=1.09), and therefore an overall Gap Score that is negative (M=-1.50, SD=1.46). The histogram shows only slight positive skewness (S=0.351) and kurtosis (K=0.487), and the Q-Q plot follows the straight line closely, except for a single outlier.

The assurance dimension (see Table 1) comprised four items of the SERVQUAL instrument under both expectations and perceptions. This dimension focused on the academic aspects of services, including the performance of academic staff, how knowledgeable they are, and whether the qualification being pursued provides quality worth the money spent on it. The measurement (N=72) gives a higher value for expectations (M=5.78, SD=1.19) than for perceptions (M=4.40, SD=1.11), and therefore an overall Gap Score that is negative (M=-1.38, SD=1.51). The overall Gap Score for assurance shows only slight positive skewness (S=0.107) and kurtosis (K=0.331).

The empathy dimension (see Table 1) had five items on the SERVQUAL instrument under both expectations and perceptions. This dimension collected student attitudes on the levels of caring and personalized attention the faculty provides, focusing on faculty operating hours and the extent to which the faculty understands student needs and challenges. The measurement (N=72) gives a higher value for expectations (M=5.39, SD=1.44) than for perceptions (M=3.79, SD=1.21), and therefore an overall Gap Score that is negative (M=-1.61, SD=1.55). The overall Gap Score for empathy shows slight positive skewness (S=0.615); although kurtosis (K=1.707) is higher for this dimension than for the other four, it is not visually pronounced.

Table-1. Key descriptive statistics for the five dimensions.

Dimension                       N     Mean             Std. Deviation    Skewness          Kurtosis
                                      Expect  Percept  Expect  Percept   Expect  Percept   Expect  Percept
Tangibility (items 1-4)         72    5.71    4.24     1.12    1.03      -1.37   -0.43     1.76    -0.16
Reliability (items 5-9)         72    5.76    4.21     1.28    1.06      -1.33   0.007     1.58    -0.72
Responsiveness (items 10-13)    72    5.88    4.39     1.16    1.09      -1.58   -0.13     2.63    -0.73
Assurance (items 14-17)         72    5.78    4.40     1.19    1.11      -1.18   -0.09     0.0827  -0.80
Empathy (items 18-22)           72    5.39    3.78     1.44    1.21      -0.96   -0.04     -0.37   -0.50
Valid N                         72

Source: Survey Data
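The per-dimension statistics in Table 1 (and the totals in Table 2) are standard SPSS descriptives. A rough Python equivalent is sketched below with simulated scores standing in for the survey data; `bias=False` requests the small-sample-adjusted skewness and excess-kurtosis estimates, which approximate (though may not exactly reproduce) the SPSS output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-respondent mean scores for one dimension (7-point scale).
expect = rng.uniform(3, 7, size=72)
percept = rng.uniform(2, 6, size=72)

def describe(x):
    """Mean, sample SD, adjusted skewness and excess kurtosis,
    mirroring the columns of Tables 1 and 2."""
    return {
        "mean": x.mean(),
        "sd": x.std(ddof=1),                        # sample SD, as SPSS reports
        "skew": stats.skew(x, bias=False),          # adjusted Fisher-Pearson
        "kurtosis": stats.kurtosis(x, bias=False),  # excess kurtosis (normal = 0)
    }

for label, x in [("Expect", expect), ("Percept", percept)]:
    d = describe(x)
    print(label, {k: round(v, 3) for k, v in d.items()})
```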

Table 2 shows the total mean expectation (N=72, M=5.70, SD=1.09) with skewness (S=-1.190) and kurtosis (K=0.824). The mean perception (N=72, M=4.20, SD=0.94) has skewness (S=-0.180) and kurtosis (K=-0.573) values quite different from those for expectations.

Table-2. Descriptive statistics for the total mean expectation and perception

                          N     Mean     Std. Deviation   Variance   Skewness               Kurtosis
                                                                     Statistic  Std. Err.   Statistic  Std. Err.
Mean Expectation value    72    5.7011   1.08987          1.188      -1.190     0.283       0.824      0.559
Mean Perception value     72    4.2007   0.93539          0.875      -0.180     0.283       -0.573     0.559
Valid N                   72

Source: Survey Data

3.3. Tests for Hypotheses

In this section, the statistical tests underpinning the discussion and conclusions are presented sequentially. The first research question is: What is the relationship between students' expectations and perceptions of quality services in the specific dimensions of reliability, assurance, tangibility, empathy, and responsiveness? Two hypotheses were presented under this question. The first (H01) was that there is no statistically significant correlation between students' expectations and perceptions of the overall service quality. To test this hypothesis, Pearson's correlation was used to examine the relation between the mean expectation and the mean perception of service quality under each of the five SERVQUAL dimensions. The tangibility (r = .219, p = .065), responsiveness (r = .166, p = .163) and assurance (r = .135, p = .257) dimensions showed weak positive correlations that were not significant. The reliability (r = .284, p = .015) and empathy (r = .340, p = .003) dimensions also revealed weak positive correlations, but both were significant (p < .05). The means of expectations and perceptions are positively and significantly correlated (r = .253, p < .05). Therefore, the null hypothesis was rejected, indicating a statistically significant correlation between students' expectations and perceptions of the overall service quality.
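For readers replicating the correlation test outside SPSS, the sketch below shows the equivalent Pearson test in Python. The data are simulated with a deliberately weak positive relationship, so the exact r and p values will differ from the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical mean expectation and perception scores per respondent
# for one SERVQUAL dimension (n = 72), with a weak positive link built in.
expect = rng.normal(5.7, 1.1, size=72)
percept = 0.3 * expect + rng.normal(2.5, 0.9, size=72)

# Two-sided Pearson correlation test, as used for H01.
r, p = stats.pearsonr(expect, percept)
print(f"r = {r:.3f}, p = {p:.3f}")
if p < .05:
    print("reject H01: expectations and perceptions are significantly correlated")
```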

The second hypothesis (H02) for the first research question states: there is no statistically significant influence of any one or more dimensions that predict overall service quality. Multiple regression analysis was conducted to test whether the Gap Scores of each dimension affected the overall Gap Score (SERVQUAL value). After the assumptions for the data had been confirmed through collinearity and normality tests and visual inspection of histograms and scatterplots, the model summary confirmed that the overall Gap Score is strongly influenced by the Gap Scores of the five dimensions. The responsiveness dimension was the strongest predictor (p < .001), explaining 86% of the variation in the overall Gap Score. The Gap Scores of the five dimensions were also found to be highly correlated (r > .500, p < .001). On this basis, H02 was rejected, as model 5 (see Table 3) showed that the five dimensions jointly predict the overall service quality or Gap Score (SERVQUAL value).

Table-3. Model summary for multiple regression

Model  R        R Square  Adjusted R Square  Std. Error of the Estimate  R Square Change  F Change  df1  df2  Sig. F Change  Durbin-Watson
1      .927a    .860      .858               .46906                      .860             428.948   1    70   .000
2      .960b    .923      .920               .35107                      .063             55.957    1    69   .000
3      .982c    .956      .954               .23866                      .042             81.305    1    68   .000
4      .991d    .981      .980               .17431                      .017             60.478    1    67   .000
5      1.000e   1.000     1.000              .00000                      .019             —         1    66   —              1.375

a. Predictors: (Constant), Responsiveness GAP score
b. Predictors: (Constant), Responsiveness GAP score, Empathy GAP score
c. Predictors: (Constant), Responsiveness GAP score, Empathy GAP score, Tangibility GAP score
d. Predictors: (Constant), Responsiveness GAP score, Empathy GAP score, Tangibility GAP score, Assurance GAP score
e. Predictors: (Constant), Responsiveness GAP score, Empathy GAP score, Tangibility GAP score, Assurance GAP score, Reliability GAP score
Dependent Variable: Overall GAP score or SERVQUAL Value

  Source: Survey Data
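The stepwise build-up of R² in Table 3 can be illustrated with a short Python sketch. The data here are simulated, the entry order of predictors is fixed to match Table 3 rather than selected automatically, and the overall Gap Score is constructed as the mean of the five dimension Gap Scores, which is why the final model reaches R² = 1.000, just as model 5 does in Table 3 (the overall SERVQUAL value is an exact function of the dimension scores).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 72

# Hypothetical Gap Scores for the five dimensions, in Table 3's entry order.
names = ["responsiveness", "empathy", "tangibility", "assurance", "reliability"]
X = rng.normal(-1.5, 1.4, size=(n, 5))

# Overall Gap Score defined as the mean of the dimension scores, so the
# full model is an exact fit by construction.
overall = X.mean(axis=1)

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

# Add one predictor at a time and report the R-square change, as in Table 3.
prev = 0.0
for k in range(1, 6):
    r2 = r_squared(X[:, :k], overall)
    print(f"model {k} (+{names[k - 1]:14s}) R^2 = {r2:.3f}  change = {r2 - prev:.3f}")
    prev = r2
```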

Reflecting on the outcomes of the hypotheses for the first research question, there is strong statistical support that, for these postgraduate students, expectations and perceptions are positively related (H01). In addition, all dimensions (and hence the items in the questionnaire) were confirmed to contribute positively to the Gap Scores (H02).

All dimensions tested through the SERVQUAL instrument were found to have a negative Gap Score (ranging from M = -1.61 to -1.38), which suggests that a collective improvement across the items of all five dimensions is required. In practical terms, it makes sense to set aside the expectation ratings, over which the institution has little influence, and work instead on improving perceptions. Average perceptions across the dimensions ranged from 3.78 to 4.40. On a 7-point scale, this range straddles the scale midpoint, meaning perceptions are only slightly better than mediocre.

The Gap Scores (GS) in this study are statistically sound and can be interpreted with confidence. To improve the GS, effort must focus on the items within each dimension that students rated poorly. Under the disconfirmation paradigm and Expectancy Disconfirmation Theory (EDT), these results represent negative disconfirmation, technically suggesting that students are dissatisfied. The Gap Model raises a concern here: dissatisfied students may complain if the GS, or level of dissatisfaction, falls below their 'zone of indifference'. Therefore, identifying the areas of least satisfaction and improving them, while maintaining the positive aspects of quality, is vital. Empathy (M = -1.61) has the poorest GS and assurance (M = -1.38) is the most positively ranked. Responsiveness was the strongest predictor of the overall GS, while reliability contributed the least. We can hence infer that the items of the responsiveness dimension are currently being discharged best at the institute, while the least satisfactorily executed items relate to the reliability dimension.

Delving into root causes and correcting them is one of the objectives of this study. The responsiveness dimension (M = -1.50), which relates to information dissemination, prompt responses and feedback for students, the willingness of staff to provide academic assistance, and the willingness to help students with their personal skills, had the most influence on the overall score. We can assume these are strengths of the institute that are most influential for student satisfaction, even though the overall ranking is still negative. To build further on this dimension, the HEI could make a concerted effort to resolve timetabling issues, provide better coordination of events or meetings that affect students, and improve all aspects of student liaison, especially on sensitive issues like assignments and examinations. Improving feedback and response mechanisms in communicating with students, demonstrating increased willingness to assist students when needed, and further exhibiting readiness to help improve students' personal and communication skills can all enhance the positive effects of this dimension. It should be reiterated that the multiple-stakeholder perspective of Harvey and Knight's framework is crucial in analyzing such decisions, because the faculty is accountable to, and requires the help of, other stakeholders in its operations. For example, improving timetabling requires the active involvement of lecturers, administrative and operations staff. Improving response and feedback mechanisms with students requires multiple media, including noticeboards, websites, leaflets, promotional adverts, emails and even phone calls. Timely dissemination of any news that affects students requires simultaneous updating of the noticeboard, the website, promotional materials, and announcements in newspapers or on television. A coordinated effort from all stakeholders, both internal and external, is needed to streamline any work done to enhance student satisfaction.

Empathy, the second strongest predictor of the GS, measures convenience, opening hours and services tailored to students, and was ranked lowest (M = -1.61). This is understandable: the course was pursued by working adults on a full-time load, with a large number of students travelling from the atolls, so tailored services meeting personal needs would not have been a practical endeavour for the HEI. Nonetheless, as a first step, seeking the perspectives of stakeholders such as lecturers and administrators through a consultative process would make changes to course delivery effective and earn their buy-in. Since empathy is the second strongest predictor yet ranked lowest, a very focused effort to improve its items is required to influence the GS positively.

The tangibility dimension (M = -1.47) evaluated the more physical aspects of the faculty associated with service delivery and student satisfaction, and is directly linked to one of the four defining characteristics of a service. It assessed the acceptability of teaching and information technology facilities, the appearance of the building and compound area, the classrooms, and the quality of the learning materials provided for the courses. Tangibility carried an R² change of .042 to the GS (see Table 3), a moderate contribution relative to the other dimensions.
The reliability dimension (M = -1.55), ranked second lowest after empathy (M = -1.61), was the weakest predictor of the GS, entering the regression model last with an R² change of .019. This dimension appraises the consultative process between staff and students in addition to aspects of trustworthiness, including resolving issues, acting on commitments, timely responses and the reliability of assessments.

The assurance dimension (M = -1.38) investigated the quality of the academic courses, the fitness of the courses for employment, whether the money spent on courses returns equivalent value, the various support services for students, and the lecturers' knowledge of the subjects they teach. This dimension was the best ranked, or least negative, of all. By contrast, it made one of the poorest contributions to GS, or student satisfaction (R2 change = .017), suggesting that efforts to improve the assurance dimension will have the least effect on student satisfaction. Hence, it would be more prudent to concentrate effort on items that contribute more towards GS, even if they are already ranked more positively. 
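The R2 change figures quoted above come from hierarchical regression: a dimension's score is entered after the other predictors, and the resulting increase in R2 is its incremental contribution to GS. A minimal numpy sketch of that computation on synthetic data (the predictors, coefficients and noise below are illustrative assumptions, not the study's data) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 72  # same sample size as the study; the data themselves are synthetic

# Four baseline predictors plus one candidate dimension score.
base = rng.normal(size=(n, 4))
dim = rng.normal(size=n)
y = base @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.15 * dim + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = ((y - y.mean()) ** 2).sum()
    return 1 - (resid ** 2).sum() / tss

r2_without = r_squared(base, y)                          # model without the dimension
r2_with = r_squared(np.column_stack([base, dim]), y)     # model with it added
r2_change = r2_with - r2_without                         # incremental contribution
```

Because the models are nested, the R2 change is never negative; a small value, like the .017 reported for tangibility and assurance, means the dimension adds little explanatory power once the others are in the model.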

In a service organization, word-of-mouth can damage the image if negative or boost it if positive (Kotler and Armstrong, 2008 ). This alone is reason enough for any service business to improve satisfaction (quality). In addition, satisfaction increases student retention and reduces attrition (Arokiasamy and Abdullah, 2012 ; Dehghan, 2013 ), so improving satisfaction can raise retention and market share. Higher student satisfaction can be achieved by narrowing the GS. Even if the HEI has no discretion to change student expectations, it can change perceptions, which will narrow the GS. This can only be achieved by raising positive perceptions of service quality at the HEI, through sensible service quality changes that students will appreciate. A student-oriented service offer is key to achieving satisfaction, which means investigating quality through the lens of students, taking the stakeholder perspective of quality advocated by Harvey and Knight's framework. Different stakeholders do place conflicting demands, as discussed in the literature review, but it has been shown that no service excels without orienting itself to its customers (Lovelock et al., 2009 ).
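Since GS is simply the perception score minus the expectation score, the arithmetic behind this discussion is straightforward; a sketch with illustrative dimension means (placeholders, not the study's data) is:

```python
# Illustrative SERVQUAL gap-score calculation. GS = Perception - Expectation
# per dimension; a negative GS means expectations exceed perceptions.
# All numbers below are made-up placeholders, not the study's data.

expectations = {"tangibility": 6.2, "reliability": 6.3, "responsiveness": 6.4,
                "assurance": 6.1, "empathy": 6.3}
perceptions  = {"tangibility": 4.7, "reliability": 4.8, "responsiveness": 4.7,
                "assurance": 4.7, "empathy": 4.7}

gap_scores = {d: round(perceptions[d] - expectations[d], 2) for d in expectations}
overall_gap = round(sum(gap_scores.values()) / len(gap_scores), 2)
```

Narrowing GS therefore means either raising perceptions or tempering expectations; as argued above, only perceptions are within the HEI's direct control.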

In summary, to address this research question, it has to be reiterated that SERVQUAL covers a comprehensive range of issues. As the outcomes of the hypotheses revealed, there is a positive correlation between expectations and perceptions, and all dimensions contribute towards GS (satisfaction or quality); all are therefore predictors of the Gap Score, or overall service quality. This suggests that every item listed in the instrument should be heeded to improve service quality. For these reasons, a holistic quality strategy needs to be implemented at the HEI, incorporating all SERVQUAL items spanning processes and procedures (reliability, responsiveness, assurance, empathy) as well as infrastructure and human assets (tangibility).  

Research question two asks: how do students’ expectations and perceptions differ across demographic characteristics? To review this research question and test the four hypotheses set under it, a MANOVA was computed for each, after validating the data through various pre-tests. The first hypothesis states that there is no statistically significant difference in the expectations and perceptions of current students and graduates (H03). All MANOVA statistics (see Table 4) were non-significant for H03, and post-hoc Tukey’s HSD tests (Table 5) were non-significant among all batches (2013, 2014, 2015) of students. Hence, the null hypothesis is accepted: there is no significant difference in the expectations and perceptions of current students and graduates.

Table-4. Multivariate test results for current students and graduates

Test                 Value   F        Hypothesis df   Error df   Sig.
Pillai’s trace       0.043   0.759    4               138        0.553
Wilks’ lambda        0.957   0.752a   4               136        0.559
Hotelling’s trace    0.044   0.744    4               134        0.564
Roy’s largest root   0.036   1.236b   2               69         0.297

a. Exact statistic
b. The statistic is an upper bound on F that yields a lower bound on the significance

Source: Survey Data

Table-5.Tukey's HSD post-hoc tests

Dependent Variable       (I)    (J)    Mean Difference (I-J)   Std. Error   Sig.
Mean Expectation value   2013   2014    0.2959                 0.39203      0.732
                                2015    0.2307                 0.36571      0.804
                         2014   2013   -0.2959                 0.39203      0.732
                                2015   -0.0653                 0.29231      0.973
                         2015   2013   -0.2307                 0.36571      0.804
                                2014    0.0653                 0.29231      0.973
Mean Perception value    2013   2014   -0.4749                 0.32644      0.319
                                2015   -0.6743                 0.30453      0.076
                         2014   2013    0.4749                 0.32644      0.319
                                2015   -0.1994                 0.24341      0.693
                         2015   2013    0.6743                 0.30453      0.076
                                2014    0.1994                 0.24341      0.693

Based on observed means.

Source: Survey Data
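The pairwise comparisons reported in Table 5 are Tukey HSD tests. A sketch of how such comparisons can be computed with SciPy's `tukey_hsd` (available from SciPy 1.8): the three batch samples below are synthetic stand-ins, and the group sizes are assumptions chosen only to sum to the study's n = 72.

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(3)
# Synthetic mean-expectation scores for the three intakes (illustrative only;
# group sizes are assumptions, not taken from the study).
batch_2013 = rng.normal(loc=5.7, scale=1.0, size=14)
batch_2014 = rng.normal(loc=5.5, scale=1.0, size=26)
batch_2015 = rng.normal(loc=5.6, scale=1.0, size=32)

res = tukey_hsd(batch_2013, batch_2014, batch_2015)
# res.statistic holds the pairwise mean differences (I - J);
# res.pvalue is a symmetric 3x3 matrix of family-wise adjusted p-values.
pairwise_p = res.pvalue
```

Non-significant entries throughout `pairwise_p`, as in Table 5, would support accepting H03.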

The second hypothesis states that there is no statistically significant difference in the expectations and perceptions of male and female students (H04). For these gender-based differences in expectations and perceptions of service quality, all MANOVA statistics computed (Table 6) were significant at p < .05. The between-subjects effects, or follow-up ANOVAs (Table 7), confirmed this for expectations (p = .008) but not for perceptions (p = .268). Since the ANOVAs serve only as follow-up checks, and the MANOVA, which controls the error rate across the dependent variables, was significant on all statistics, there is strong evidence to reject H04 and accept Ha4.

Table-6.Multivariate test results for males and females

Test                 Value   F        Hypothesis df   Error df   Sig.
Pillai’s trace       0.099   3.783a   2               69         0.028
Wilks’ lambda        0.901   3.783a   2               69         0.028
Hotelling’s trace    0.11    3.783a   2               69         0.028
Roy’s largest root   0.11    3.783a   2               69         0.028

a. Exact statistic

Source: Survey Data

Table-7. Tests of between-subjects effects (males and females)

Source   Dependent Variable       Type III Sum of Squares   df   Mean Square   F       Sig.
Sex      Mean Expectation value    8.072                    1    8.072         7.409   0.008
         Mean Perception value     1.088                    1    1.088         1.248   0.268
Error    Mean Expectation value   76.264                    70   1.089
         Mean Perception value    61.034                    70   0.872

a. R Squared = .096 (Adjusted R Squared = .083)
b. R Squared = .018 (Adjusted R Squared = .003)

Source: Survey Data
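With only two groups and two dependent variables (mean expectation, mean perception), all four MANOVA statistics in Table 6 are functions of Hotelling's two-sample T2; for instance, Hotelling's trace equals T2/(N - 2), and plugging the table's trace of 0.11 with N = 72 into the F conversion below approximately reproduces the reported F of 3.783. A numpy sketch on synthetic data (the expectation means and SDs follow the text, but the perception parameters and the data themselves are assumptions, not the study's raw data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic (expectation, perception) pairs for 35 males and 37 females;
# perception means/SDs are assumed for illustration.
males = rng.normal(loc=[5.36, 4.70], scale=[1.18, 0.95], size=(35, 2))
females = rng.normal(loc=[6.03, 4.90], scale=[0.90, 0.95], size=(37, 2))

def hotelling_t2(a, b):
    """Two-sample Hotelling's T^2 and its exact F conversion."""
    na, nb, p = len(a), len(b), a.shape[1]
    diff = a.mean(axis=0) - b.mean(axis=0)
    # Pooled within-group covariance matrix.
    S = ((na - 1) * np.cov(a, rowvar=False)
         + (nb - 1) * np.cov(b, rowvar=False)) / (na + nb - 2)
    t2 = (na * nb) / (na + nb) * diff @ np.linalg.solve(S, diff)
    f = t2 * (na + nb - p - 1) / ((na + nb - 2) * p)
    return t2, f

t2, f = hotelling_t2(males, females)
```

The F statistic here has (p, N - p - 1) = (2, 69) degrees of freedom, matching the hypothesis and error df shown in Table 6.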

The third hypothesis states that there is no statistically significant difference in the expectations and perceptions of full-time and block-mode students (H05). A separate MANOVA was not run to assess differences between full-time and block-mode students because this grouping comprises the same participants as the grouping of current students and graduates tested under H03. Since the null hypothesis was accepted for H03, the same follows for H05. Therefore, H05 is accepted: there is no statistically significant difference in the expectations and perceptions of full-time and block-mode students.

The fourth and last hypothesis set to answer the second research question states that there is no statistically significant difference in the expectations and perceptions of students admitted under normal and alternative entry criteria (H06). The MANOVA computed to address this hypothesis (Table 8) showed no significance on any of the tests run, and the between-subjects effects that followed (Table 9) were likewise non-significant. Hence, the null hypothesis H06 is accepted: there is no statistically significant difference in the expectations and perceptions of students based on normal and alternative entry criteria.

Table-8. Multivariate test results for normal and alternative criteria

Test                 Value   F       Hypothesis df   Error df   Sig.
Pillai’s trace       0.009   .312a   2               69         0.733
Wilks’ lambda        0.991   .312a   2               69         0.733
Hotelling’s trace    0.009   .312a   2               69         0.733
Roy’s largest root   0.009   .312a   2               69         0.733

a. Exact statistic

Source: Survey Data

Table-9. Tests of between-subjects effects (normal and alternative entry criteria)

Source      Dependent Variable       Type III Sum of Squares   df   Mean Square   F       Sig.
Entry_Cri   Mean Expectation value    0.153                    1    0.153         0.128   0.722
            Mean Perception value     0.532                    1    0.532         0.605   0.439
Error       Mean Expectation value   84.182                    70   1.203
            Mean Perception value    61.59                     70   0.88

a. R Squared = .002 (Adjusted R Squared = -.012)
b. R Squared = .009 (Adjusted R Squared = -.006)

Source: Survey Data

Reflecting on the outcomes of the hypotheses for the second research question, there is strong evidence that expectations and perceptions of service quality generally do not differ among different groups of students. It has been established that, at the HEI, the modality in which master’s students have been studying is unlikely to affect their expectations and perceptions of service quality (H05). The same holds for current students and those who have graduated: sentiments about the quality of services at the HEI have not significantly changed after students graduated (H03). Views of service quality also do not differ between groups of students based on the entry criteria under which they were allowed to enroll in master's courses (H06).

However, male and female students do differ significantly in how they sense the quality, or satisfaction, of services at the HEI (H04). For example, expectations of services were ranked significantly differently by males (M = 5.36, SD = 1.18) and females (M = 6.03, SD = 0.90). Expectations were consistently high across all groups of students tested in the study, but particularly high for females. In fact, expectations were ranked so highly in the survey as to appear unrealistic. This skew is visible in the distorted normal curve of the average-expectations histogram. Statistically, the average expectation was significant at p < .001 on both the K-S and S-W statistics, indicating a departure from normality. This is not an acceptable pre-test result: parametric calculations assume approximately normal data and are liable to give inaccurate results otherwise. Ideally, the expectations data would have been normalized before the statistics were computed. On a positive note, it can be argued that the faculty's negative GS arose partly because expectations were unrealistically high. Females make up 51.4% of the cohort and 76% of the participants are above 30 years old; with many of them working mothers, high expectations of services from the HEI are plausible. On a more critical note, however, the ratings could well reflect the real situation. Repeated cross-sectional surveys across wider student bases, perhaps across faculties, are needed to obtain more inclusive views on student satisfaction, and a mixed-methods approach employing focus groups and interviews would provide a platform to triangulate the survey results.
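The K-S (Kolmogorov-Smirnov) and S-W (Shapiro-Wilk) checks described above can be sketched with SciPy on synthetic, ceiling-skewed scores; the data below merely mimic the described skew, and the distribution, scale parameter and seed are arbitrary assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# 72 synthetic expectation means skewed toward the top of a 7-point scale,
# mimicking the ceiling effect described in the text (not the study's data).
scores = np.clip(7 - rng.exponential(scale=0.8, size=72), 1, 7)

# Shapiro-Wilk test of normality.
sw_stat, sw_p = stats.shapiro(scores)
# One-sample K-S test against a normal with the sample's own mean and SD.
# (SPSS reports a Lilliefors-corrected K-S, which adjusts the p-value for
# this parameter estimation; plain kstest is only an approximation of it.)
ks_stat, ks_p = stats.kstest(scores, "norm",
                             args=(scores.mean(), scores.std(ddof=1)))
```

A significant p-value on either test flags the departure from normality noted above; a normalizing transformation (for example, reflect-and-log for negatively skewed scores) would be one remedy before running parametric tests.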

In brief, the key implication of this research question is that demographics are unlikely to affect students' quality assessments. Three of the four null hypotheses under this question were accepted, suggesting that demographic effects on students' perceptions are less probable than might have been assumed in the study. One reason for this could be that Maldivian students do not face the multicultural, religious, ethnic and linguistic differences that shape thinking in more diverse countries. Sampling more faculties in replication studies might reveal further insight into demographic effects on students' perceptions of service quality.

4. IMPLICATIONS

Students are the key stakeholders of any higher education institution, and properly understanding their needs and wants is crucial to the success and growth of the institution in the longer term. A competitive higher education environment requires catering to the exact requirements of the student, as far as possible (Kelso, 2008). This is an efficacious means to earn student loyalty towards the HEI and to facilitate the spread of positive word-of-mouth, a powerful promotional conduit in a close-knit society like the Maldives. Given the urgency of closing the quality gaps at the HEI, the study carries the following implications.

The senior management of the faculty should take swift, decisive action to remove impediments to student satisfaction, especially those that relate to the items listed under the five dimensions of service quality on the SERVQUAL instrument. As the selected HEI received a negative Gap Score on all dimensions, it is necessary to focus on all of them; however, as resources allow, it would be important to focus first on the predictor dimensions that affect student satisfaction the most, starting from the responsiveness dimension. The HEI should adopt an open-door policy to note and act on student dissatisfactions under a regulated, student-oriented system. Listening to students will not translate into change unless strategic policies aimed at improving student satisfaction are put in place; without policies, there will be no budgets, human assets or accountability measures. It is about time the HEI employed staff-performance appraisals based on student satisfaction levels in key areas of responsibility. Departments, lecturers, library assistants, and front-office personnel who consistently provide high levels of student satisfaction should be encouraged to sustain that effort through a motivational package.

Administrative bureaucracy stifles serving students and acting promptly on their many dissatisfactions. A functional audit of the faculty should be conducted to streamline processes and procedures. Self-help systems, automation, social media and an array of web-based solutions can be used to eliminate bloated bureaucratic policies and practices; flexibility, choice, and options will increase staff as well as student satisfaction with services. An open-door policy encourages students to reach the relevant personnel with their difficulties, and the mere act of listening can bring about satisfaction. This aspect relates, for example, to one of the reliability items on SERVQUAL, ‘Sincere intention in resolving student's problems and concerns'. However, if policies and procedures are not implemented in a timely manner, a cascading domino effect of dissatisfaction can follow; another reliability item mentions ‘Fulfilling previous commitments/promises at the right time'. A number of such items on SERVQUAL relate directly or indirectly to the need for proactive engagement with students.

Further, staff need to be developed and trained so that they appreciate the importance of satisfying students. This is especially important for boundary-spanning, entry-level staff who routinely interact with students. Staff working in student-sensitive areas should be given maximum autonomy, flexibility, and the prerogative to find solutions to problems within a wide margin of operational authority. The institution's Postgraduate Research Centre should actively engage in market research to ascertain student needs and set up a mechanism whereby research is not only published in journals but also translated into the institution's own policies.

5. SUGGESTIONS FOR FURTHER STUDIES

Further studies using the same methodology, as well as mixed methods, could be conducted to confirm the validity of this study's outcomes. Quantitative methods alone will not reveal the multitude of reasons why students measured service quality the way they did; mixed methods would allow the researcher to understand the root causes of student dissatisfaction in depth and to query participants further when the statistics flag concerning issues. Maturity effects, education, changing economic conditions and globalization could influence student expectations and perceptions (Nell and Cant, 2014 ), which can directly influence attitudes towards and evaluations of educational service providers. Therefore, continuous cross-sectional and longitudinal surveys are recommended to ensure that the faculty keeps pace with such changes and stays up-to-date with the rapidly changing quality attributes most favoured by students. 

This study focused only on master's students, whose course is currently delivered only in block mode. It is safe to assume that different issues and dissatisfactions will be inherent in conventionally delivered courses at the HEI. A wider sample could therefore be used in a future replication study, enabling senior management to make broadly relevant policy changes rather than changes focused only on master's students, and encouraging the HEI council and relevant committees to bring about institute-wide change. Indeed, it would be costly, ineffective and shortsighted to make major strategic changes at the HEI level without considering the institution at large. Students are the key stakeholders of the HEI; a multi-stakeholder assessment using a combination of SERVQUAL and Harvey and Knight's framework would enable the HEI to make decisions that cater to the needs of all its stakeholders.

SERVQUAL bases its evaluation on the Gap Model (Donlagic and Fazlic, 2015 ), which quantifies quality as the difference between perceptions and expectations, the gap score. However, as discussed in the literature review, some researchers have criticized this model and the tool because they believe the construct of expectation cannot be measured accurately: the different schools students attended expose them to different experiences, and those experiences shape their expectations. For this reason, critics of the SERVQUAL measurement developed the SERVPERF measurement, which is based only on the perceptions of the student. Abdullah (2006 ) claims this tool has higher internal consistency reliability scores and explains more variance in the overall measure than SERVQUAL. It is therefore recommended that this research be replicated with a perception-only tool such as SERVPERF, or the higher-education-specific HEdPERF, so that the outcomes can be triangulated.

6. CONCLUSION

The fundamental aims of this study were to identify the extent of satisfactory services provided by the faculty and to make recommendations for the provision of services in the areas of poor satisfaction. In meeting these objectives, the specific gaps identified through the SERVQUAL dimensions have been discussed above. The study of service quality from the perspective of master's students at the selected HEI provided illuminating insights into how students judge quality and satisfaction. In light of the findings and discussion, quality gaps have been identified, and improvement efforts should focus on the areas that students ranked poorly. The hypothesis tests supported that SERVQUAL yielded valid measurements of student sentiment, most at statistically significant levels, and offered an understanding of interaction effects across different student groups. In brief, the two research questions have been satisfactorily addressed based on the outcomes of the hypotheses.

Funding: No financial support received for this study. 
Competing Interests: The authors declare that they have no competing interests. 
Contributors/Acknowledgement: We would like to thank Ms Azeema Abdulla who has assisted in completing the research.

REFERENCES

Abdullah, F., 2006. Measuring service quality in higher education: HEdPERF versus SERVPERF. Marketing Intelligence and Planning, 24(1): 31–47.

Agbor, J.M., 2011. The relationship between customer satisfaction and service quality: A study of three service sectors in Umeå. Unpublished Master's Dissertation, Umeå School of Business, Umeå, Sweden.

Arokiasamy, A.R.A. and A.G. Abdullah, 2012. Service quality and students' satisfaction at higher learning institutions: A case study of Malaysian university competitiveness. International Journal of Management and Strategy, 3(5): 1–16.

Atrek, R.A.D.B., 2012. Is there a need to develop a separate service quality scale for every service sector? Verification of SERVQUAL in higher education services. Süleyman Demirel University Journal of Faculty of Economics and Administrative Sciences, 17(1): 423–440.

Aturupane, H., J. Fielden, S. Mikhail and M. Shojo, 2011. Higher education in the Maldives: An evolving seascape. Washington, DC: World Bank.

Aturupane, H. and M. Shojo, 2012a. Enhancing the quality of education in Maldives: Challenges and prospects. The World Bank. Retrieved from http://www.worldbank.org/en/country/maldives/research.

Burns, A.C. and R.F. Bush, 2006. Marketing research. 5th Edn., New Jersey, NJ: Pearson Education, Inc.

Cook, C. and B. Thompson, 2000. Reliability and validity of SERVQUAL scores used to evaluate perceptions of library service quality. Journal of Academic Librarianship, 26(4): 248–258.

Daniel, C.N. and L.P. Berinyuy, 2010. Using the SERVQUAL model to assess service quality and customer satisfaction. Unpublished Master's Dissertation, Umeå School of Business, Umeå, Sweden.

Dehghan, A., 2013. Service quality and loyalty: A review. Modern Management Science & Engineering, 1(2): 197–208.

Donlagic, S. and S. Fazlic, 2015. Quality assessment in higher education using the SERVQUAL model. Management, 20(1): 39–57.

Hirmukhe, J., 2012. Measuring internal customer’s perception on service quality using SERVQUAL in administrative services. International Journal of Scientific and Research Publications, 2(3): 1–6.

Ibrahim, E., L.W. Wang and A. Hassan, 2013. Expectations and perceptions of overseas students towards service quality of higher education institutions in Scotland. International Business Research, 6(6): 20–30.

Kelso, R.S., 2008. Measuring undergraduate student perceptions of service quality in higher education. Graduate Theses and Dissertations, University of South Florida.

Khodayari, F. and B. Khodayari, 2011. Service quality in higher education. Interdisciplinary Journal of Research in Business, 1(9): 38–46.

Koni, A., K. Zainal and M. Ibrahim, 2013. An assessment of the services quality of Palestine. International Education Studies, 6(2): 33–48.

Kotler, P. and G. Armstrong, 2008. Principles of marketing. 12th Edn., Delhi, India: Nutech Potolithographers.

Landrum, H., V. Prybutok, X. Zhang and D. Peak, 2009. Measuring IS system service quality with SERVQUAL: Users’ perceptions of relative importance of the five SERVPERF dimensions. International Journal of an Emerging Transdiscipline, 12(1): 17–35.

Lovelock, C., J. Wirtz and J. Chatterjee, 2009. Services marketing: People, technology, strategy. 8th Edn., Delhi, India: Dorling Kindersley (India) Pvt Ltd.

Mbise, E.R. and R.S. Tuninga, 2013. The application of SERVQUAL to business schools in an emerging market: The case of Tanzania. Journal of Transnational Management, 18(2): 101–124.

Nell, C.E. and M.C. Cant, 2014. Determining student perceptions regarding the most important service features and overall satisfaction with the service quality of a higher education institution. Management, 19(2): 63–87.

Nyeck, S., M. Morales, R. Ladhari and F. Pons, 2002. 10 years of service quality measurement: Reviewing the use of the SERVQUAL instrument. The Bi-Annual Academic Publication of Universidad ESAN, 7(13): 101–107.

Parasuraman, A., V.A. Zeithaml and L.L. Berry, 1988. SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1): 12–40.

Sickler, S.L., 2013. Undergraduate student perceptions of service quality as a predictor of student retention in the first two years. Unpublished Doctoral Thesis, Bowling Green State University, Ohio, USA.

Wang, Y.L., U.O.R.T. Luor, P. Luaran and H.P. Lu, 2015. Contribution and trend to quality research – a literature review of SERVQUAL model from 1998–2013. Informatica Economica, 19(1): 34–45.

Yeo, R.K. and J. Li, 2014. Beyond SERVQUAL: The competitive forces of higher education in Singapore. Total Quality Management, 25(1-2): 95–123.

Yun, S., 2001. Assessment of guest satisfaction of service quality of the hotel. Unpublished Master's Dissertation, University of Wisconsin-Stout.