Blog focuses on Linux, open source software, cyber security, futuring, innovation, systems engineering, Africa, and other items related to technology.
Wednesday, February 17, 2016
An Examination of the Prior Use of E-Learning Within an Extended Technology Acceptance Model and the Factors That Influence the Behavioral Intention of Users to Use M-Learning
Jonathan Abramson, Maurice Dawson, Jeffery Stevens
DOI: 10.1177/2158244015621114 Published 21 December 2015
Abstract
The purpose of this empirical study was to test specific factors of behavioral intention to use m-learning in a community college setting using a modified technology acceptance model and antecedent factors suggested by the researcher’s review of the literature. In addition, the study’s purpose was to expand understanding of behavioral intention to use m-learning and to contribute to the growing body of research. This research model was based on relevant technology acceptance literature. The study examines the significance of “prior use of e-learning” and its correlation with the behavioral intention to use m-learning. Existing models have looked at prior use of e-learning in other domains, but not specifically m-learning. Other models and studies have primarily looked at the prior use of e-learning variable as a moderating variable and not one that is directly related to attitude and behavioral intention. The study found that there is a relationship between prior use of e-learning and behavioral intention to use m-learning. This research direction was proposed by Lu and Viehland.
The growth of the Internet and mobile wireless technologies, and the acceptance of electronic learning (e-learning), have provided a foundation for the growth of mobile learning (m-learning) and formed a context in which it can co-exist and expand educational opportunities (Hoppe, Joiner, Milrad, & Sharples, 2003; Massey, Ramesh, & Khatri, 2006). This introduction examines the concept of m-learning, how the technology is viewed and framed in higher education, and how the individual looks at this technology. Little research has been done on m-learning adoption factors, although many studies have been completed on wireless service adoption and other areas, which offer some insight into the study, as the mobile wireless industry is related to m-learning from a technological perspective. Such studies have proven useful in understanding adoption factors and intentions that make the technology more useful and user friendly. These studies have been driven by new research into the context of m-learning as mobile devices have become the primary communication device for college students and other groups (Lu & Viehland, 2008; Walker & Jorn, 2009). The studies suggest that the Technology Acceptance Model (TAM) developed by Davis in 1989 is useful in determining the correlation and effects of antecedent variables on behavioral intentions (BI) to use wireless devices in many organizational contexts for a variety of purposes. To support this point, it is estimated that more than 500 million smartphones were purchased worldwide in 2011 (Weintraub, 2010) and that they would represent the majority of purchased cellular devices by 2011 (Entner, 2010). Smartphone sales were also expected to exceed those of personal computers in 2012 (Brownlow, 2012). The number of smartphone users worldwide is expected to surpass 2 billion in 2016 (Curtis, 2014). These facts have made studying mobile devices critical for many areas both inside and outside of education.
The task–technology fit needs to be understood at a more comprehensive level than it currently is; universities and schools need empirical research on m-learning to make decisions about its use and implementation and, most importantly, its effectiveness. Just because a technology is widely available does not mean that it will be used or adopted (Liu, Han, & Li, 2010). This idea has been the premise of adoption research across systems research: A system is only good if it is used.
Mobile devices are used in high numbers; the question is one of how, why, and what. What makes students want to use mobile devices, and how can this best be facilitated by organizations such as universities? This study and other related studies attempt to address the new reality of ubiquitous computing devices at the university level. Yordanova (2007) states that wireless technologies have high acceptance among the younger generation. M-learning has the attributes of being both mobile and ubiquitous (Alexander, 2006; Yordanova, 2007). M-learning is an extension of e-learning and has been tested in the literature as a stand-alone educational platform, but more importantly, as an add-on platform and channel for existing hybrid, face-to-face, and e-learning classes. Seong states that “mobile learning presumes the use of mobile Internet technology to facilitate the learning process” (Seong, 2006, p. 1). This presumption is founded on the rapid growth of wireless and mobile computing devices (Seong, 2006). Mobile devices are already being used by a majority of students for other purposes, and the literature includes many studies regarding learning and mobile devices. Sharples (2007) stated there is a need to re-conceptualize learning for the mobile age, pointing out the many existing roles of mobility and communication in the learning process. Many of these changes are found within e-learning, with its collaborative advantages and constructivist nature. This was shown in the review of the literature, as numerous applications of m-learning were provided. Therefore, a logical next step is to determine effective ways to use these devices within the contemporary classroom, whether brick and mortar, e-learning, or a hybrid learning environment.

There is a lack of empirical research concerning m-learning adoption factors. A continuing issue in information systems research is the identification and determination of the factors that are related to the cause and then acceptance of a technology (King & He, 2006). Shengquan, Xianmin, Gang, and Minjuan (2015) indicate that not much research has been conducted in the discipline of m-learning, as the field is fairly new and is just gaining acceptance as a research object within the literature. M-learning is supplemental and aids the student by providing ubiquitous access to both the online and hybrid classroom. Because these types of learning are collaboration intensive and constructivist in nature, the smartphone, an ideally suited technology for expanding the classroom, has been adopted by many for that purpose.
SEM Model Variables
Self-efficacy (SE) is the individual’s comfort level with using technology (Tweed, 2013). Facilitating conditions, the second component, are the availability of the resources needed to use the technology. Subjective norms (SN) are social pressures that lead an individual to perform a particular behavior (Ajzen, 1991); the individual’s social groups may have different opinions regarding the adoption of a technology. Prior use of e-learning (PRIORE) is the individual’s prior exposure to and use of e-learning technologies. In this study, prior use of a learning management system served as the standard for including an individual in the study.
Perceived usefulness (PU) is the degree to which the individual believes that a technology would improve his or her job performance (Davis, 1989). Perceived ease of use (PEOU) is the degree to which an innovation is easy to understand (Rogers, 2003) or the degree to which the technology is free of effort (Davis, 1989). Innovations that are perceived to be less complex to use have a higher possibility of adoption/acceptance by potential users.
BIs are correlated with actual behavior. BIs are “the single best predictor of actual usage” (Venkatesh & Davis, 1996, p. 20). In addition, “the intention-behavior linkage is probably the most uncritically accepted assumption in social science research” (Bagozzi, 2007, p. 245). Ajzen (1991) found that an individual’s attitude toward a particular behavior is equivalent to that person’s overall assessment of performing the behavior.
Data Analysis
Many of the relevant variables in this study had been predetermined by the application of previous research models. Therefore, selection of relevant variables was predetermined for examination and possible inclusion or exclusion in the study based on their respective strength in the related studies.
The survey data were entered into Warp Partial Least Squares (PLS) 3.0. PLS is a second-generation statistical technique for conducting Structural Equation Modeling (SEM)-based analysis. The utility of PLS is detailed elsewhere (Falk & Miller, 1992). With respect to technology acceptance, a number of recent studies have utilized PLS (Al-Gahtani, 2001; Venkatesh, Morris, Davis, & Davis, 2003). PLS allows for the evaluation of the psychometric properties of the indicators used to measure a variable, and the estimation of the direction and strength of the relationships among the model variables. PLS includes two sets of equations: the measurement model, or outer model, composed of equations representing the relationships between indicators and the variable they measure, and the structural model, composed of equations representing the paths among the study’s variables. PLS calculates weights and loading factors for each item in relation to its construct. The weights calculated by PLS are used to compute latent variable scores for the constructs, which reflect the contribution of each variable to its construct. Factor loadings, as with other studies of this nature, were high (Cocosila & Archer, 2010), which is typical for TAM studies.
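To make the measurement-model mechanics concrete, the sketch below (Python with NumPy, using made-up survey responses and illustrative outer weights rather than the study's data) shows how standardized indicators and weights produce a latent variable score, and how loadings then fall out as correlations between each indicator and that score.

```python
# Minimal sketch, not the study's data: how PLS-style outer weights turn
# survey indicators into a latent variable score, and how loadings are then
# the correlations between each indicator and that score. The weights here
# are made up; Warp PLS estimates them iteratively from the data.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical responses: 100 participants x 3 Likert items for one construct (e.g., PEOU)
items = rng.integers(1, 8, size=(100, 3)).astype(float)

# Standardize the indicators (PLS works with standardized data)
z = (items - items.mean(axis=0)) / items.std(axis=0)

weights = np.array([0.40, 0.35, 0.38])          # illustrative outer weights
lv = z @ weights                                # latent variable score per participant
lv = (lv - lv.mean()) / lv.std()

# Loadings: correlation of each standardized indicator with the latent score
loadings = np.array([np.corrcoef(z[:, j], lv)[0, 1] for j in range(z.shape[1])])
print("loadings:", loadings.round(3))
```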
Model fit indices are provided by the software after running the PLS analysis. Three model fit indices are reported, along with p values for the average path coefficient (APC) and average R-squared (ARS). Model fit indices are a useful set of measures related to model quality (Kock, 2011). P values for the APC and ARS should be less than .05 (Kock, 2011); this condition is met by both measures. Figure 1 below displays the research model with correlational coefficients and associated metrics.
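As a rough illustration of how APC and ARS summarize the model (this is not the software's output; the p values themselves come from resampling inside Warp PLS), the following sketch computes the two indices from the path coefficients reported in the hypotheses below and from made-up R-squared values.

```python
# Illustrative only: APC is the mean absolute path coefficient and ARS is the
# mean R-squared across endogenous variables (Kock, 2011). The path values are
# taken from the hypotheses reported below; the R-squared values are made up,
# and the associated p values would come from resampling inside Warp PLS.
import numpy as np

path_coefficients = np.array([0.46, 0.36, 0.84, 0.54, 0.22, 0.41, 0.17, 0.29])
r_squared = np.array([0.70, 0.62, 0.55])   # illustrative

apc = np.abs(path_coefficients).mean()
ars = r_squared.mean()
print(f"APC = {apc:.3f}, ARS = {ars:.3f}")
```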
The structural model consists of the latent variables and the relationships among them that represent the conceptual factors of interest. The path coefficients and p values are presented for the latent variables. Path coefficients indicate the strength and direction of the relationships along the latent variable pathways. The latent variables are the results of the loadings, which are the values from the specific questions in the study’s survey after treatment, including re-sampling, by the Warp PLS 3.0 program. The path coefficient, or beta, is analogous to a correlational coefficient and, in the structural model, describes the strength of the linear relationships between the latent constructs. A corresponding measure shown with each beta is the significance level, derived from the t statistic (the coefficient divided by its standard error). Shown on all but the outermost, or antecedent, latent variables are the coefficients of determination, or R-square values, which indicate the proportion of variance in each endogenous latent variable explained by its predictors. SNs in the research model are significantly related to BI through PEOU but not through PU. The identification of this relationship will depend on many factors, such as causal relations.
Results
Attitude (ATT) was left in the research model even though some TAM studies have removed this construct (Heerink, Krose, Evers, & Wielinga, 2009; Holden & Karsh, 2010). Attitude is quite significant in the model, as can be seen by observing the research model with path coefficients. Specific to the key external factor of interest in the study, the stronger significant relationship runs through PEOU rather than directly to BI. ATT is clearly a strong determinant of BI and one that effectively mediates many antecedent variables in the TAM. This research model uses ATT as an antecedent to BI. Djamasbi, Siegel, Tullis, and Dai (2010) found that ATT was an important factor and antecedent. The use of affect as a variable, which captures the user’s global feelings, moods, and emotions (Djamasbi et al., 2010), was also significant and positively related to attitude. Affect can also negatively influence attitude, although this point is beyond the scope of this research. What is not beyond the scope of this research is the strong effect of attitude on BI and the effectiveness of attitude as a relevant and critical mediator in this model and others. Other studies have used Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) to establish this when the latent variables were examined purely on their correlations, with no user intention model. Attitude’s inclusion as an antecedent is based on the strong support in the literature for Attitude Toward Behavior (ATB), which has been highly correlated with user intention. ATB is defined as “an individual’s positive or negative feelings (evaluative affect) about performing the target behavior” (Fishbein & Ajzen, 1975). M-learning involves objects, but it is a behavior; therefore, attitude is a more relevant variable and antecedent to BIs. Zhang, Aikman, and Sun (2008) tested attitude’s predictive capability on intentions and usage of information communications technology (ICT) and devices and found that Attitude Toward Objects (ATO) and ATB had significant predictive capabilities for initial use and continued use. Zhang et al. (2008) noted that we should not make assumptions about attitude toward related technologies, as attitudes change as a user’s ICT use increases. Therefore, attitude is regarded as a highly relevant antecedent to BIs and a critical part of answering the research questions in this study.
Convergent and Discriminant Validities
According to Geffen and Straub, convergent validity is demonstrated when a measurement item loads with a significant t value and a corresponding p value of less than .05 (Geffen & Straub, 2005). The reliability of the model’s reflective measures is exhibited by the high Cronbach’s alpha scores. Composite reliability is an indicator of how well constructs in the measurement model are described by their indicators. W. H. Chin (1998) states that the recommended threshold is .7; values above this number imply that constructs are well described by their indicators. Convergent validity is demonstrated in this study’s data by examination of the model’s loadings and cross-loadings, which should all be in the −1 to 1 range. Kock (2012) states that the two criteria recommended as the basis for concluding that a measurement model demonstrates convergent validity are that the p values associated with the loadings be lower than .05 and that the loadings be equal to or greater than .5. The study’s analysis results demonstrated convergent validity.
Reliability Tests
A Cronbach’s alpha value of at least .7 is commonly seen as acceptable (Churchill & Brown, 2006). Individual construct reliability tests need reported values above .7 to suggest that all constructs could be considered reliable (see Table 1 to review these values). Testing using Cronbach’s alpha values shows that the data exhibit high levels of reliability (Adams, Nelson, & Todd, 1992).
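For readers who want to see the reliability calculation itself, here is a minimal Python sketch of Cronbach's alpha on hypothetical item-level data (four correlated Likert-style items for one construct), not the study's survey responses; `cronbach_alpha` is a helper defined for this sketch, not a library function.

```python
# Minimal sketch of Cronbach's alpha on hypothetical item-level data for one
# construct (not the study's survey responses).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for a single construct."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(100, 1))
items = trait + rng.normal(scale=0.5, size=(100, 4))   # four correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")           # expected to clear the .7 threshold
```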
The Average Variances Extracted (AVE) are used to assess discriminant validity and convergent validity. Average variances that demonstrate acceptable validity should be 0.5 or greater (Fornell & Larcker, 1981), and all of the latent variables were at, or exceeded, this value (see Table 2).
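The AVE computation is straightforward once standardized loadings are available; the sketch below uses illustrative loadings, not the study's values, and checks them against the 0.5 criterion.

```python
# Minimal sketch: AVE from standardized indicator loadings, checked against the
# 0.5 criterion (Fornell & Larcker, 1981). The loadings are illustrative, not
# the study's values.
import numpy as np

def average_variance_extracted(loadings: np.ndarray) -> float:
    # For standardized indicators, AVE is the mean squared loading
    return float(np.mean(loadings ** 2))

example_loadings = np.array([0.78, 0.82, 0.75, 0.80])
ave = average_variance_extracted(example_loadings)
print(f"AVE = {ave:.2f}  (acceptable: {ave >= 0.5})")
```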
Full collinearity Variance Inflation Factors (VIFs) aid in determining collinearity. There are multiple accepted tests for multicollinearity in the literature. Hair, Anderson, Tatham, and Black (1998) stated that VIFs should be lower than 10. VIFs in the study ranged from 1.496 to 5.722 across the latent variables. The Warp PLS 3.0 program calculates VIFs using a full collinearity test that covers both vertical and lateral collinearity (Kock, 2012). Table 3 displays the full collinearity VIFs, and Table 4 displays the research model metrics.
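A VIF boils down to regressing each latent variable score on all of the others; the sketch below shows that basic calculation on simulated scores. It is a simplified illustration under that assumption, not a reproduction of Warp PLS's full collinearity procedure, and `variance_inflation_factors` is a helper written here for the example.

```python
# Simplified illustration of a collinearity check: each VIF is 1 / (1 - R^2)
# from regressing one latent variable score on all of the others.
import numpy as np

def variance_inflation_factors(scores: np.ndarray) -> np.ndarray:
    n_vars = scores.shape[1]
    vifs = np.empty(n_vars)
    for j in range(n_vars):
        y = scores[:, j]
        X = np.column_stack([np.ones(len(y)), np.delete(scores, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1 - np.var(y - X @ beta) / np.var(y)
        vifs[j] = 1.0 / (1.0 - r2)
    return vifs

rng = np.random.default_rng(2)
scores = rng.normal(size=(100, 4))
scores[:, 3] = 0.6 * scores[:, 0] + rng.normal(scale=0.8, size=100)  # induce collinearity
print(variance_inflation_factors(scores).round(2))
```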
Geffen and Straub (2005) defined the measurements needed for factorial validity in PLS analysis studies. Many of these measurements have been adopted in research that uses PLS in technology adoption and user intention studies. This study used several of the indicators identified in that work and demonstrated factorial validity. Factorial validity is the establishment of validity for latent constructs or latent variables (Geffen & Straub, 2005), which are “research abstractions that cannot be measured directly; variables such as beliefs and perceptions” (Geffen & Straub, 2005, p. 91). The primary variables of interest are beliefs and perceptions, not directly measured variables such as age and gender.
Reliability and Internal Consistency
Loadings for all of the latent variables, which are the correlational coefficients between the indicator variables (questions from the survey) and the latent variables, were within acceptable ranges. W. W. Chin and Gopal (1995) suggest a threshold value of .8 for loadings (coefficients). The AVE is a measure of internal consistency in the model. All measures were above the .5 value, which is commonly used as a threshold (W. H. Chin, 1998). As shown in Table 1, PLS Factorial Validity Measurements, all of the AVE estimates are well above the .5 value and are therefore accepted (Dillon & Goldstein, 1984). Composite reliability is an indicator of how well each of the constructs is described by the indicators in the measurement model. All indicators for the latent variables of PU, PEOU, ATT, and BI demonstrate high scores and are therefore used in the analysis. Indicators were judged against the .7 threshold (W. W. Chin & Gopal, 1995). Communality is the squared correlation between an indicator and its latent construct (W. H. Chin, 1998); it measures the capacity of indicators to describe their latent constructs, and all values met the established communality threshold of .5 (W. H. Chin, 1998). Table 5 displays the effect sizes for the path coefficients.
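Composite reliability can likewise be computed directly from standardized loadings; the following sketch, with illustrative loadings rather than the study's, applies the standard formula and the .7 threshold noted above. `composite_reliability` is a helper defined here, not a library call.

```python
# Minimal sketch: composite reliability from standardized loadings, judged
# against the .7 threshold noted above. Loadings are illustrative.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    squared_loading_sum = loadings.sum() ** 2
    error_variance = (1 - loadings ** 2).sum()
    return float(squared_loading_sum / (squared_loading_sum + error_variance))

example_loadings = np.array([0.78, 0.82, 0.75, 0.80])
print(f"CR = {composite_reliability(example_loadings):.2f}")
```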
Warp PLS 3.0 provides path coefficients and effect sizes after the analysis. The effect sizes are Cohen’s (1988) f-squared coefficients (Kock, 2012). Standard errors and effect sizes are presented in the same manner as the path coefficients, which makes visualization easier because they appear in the same order. Effect sizes are the most relevant to this analysis and discussion, as they offer insight into the individual contributions of the predictor latent variables to the R-squared coefficients of their criterion latent variables (Kock, 2012). Effect sizes aid in determining whether the effects indicated by path coefficients are small, medium, or large (Kock, 2012); recommended values are 0.02, 0.15, and 0.35, respectively (Cohen, 1988). Therefore, all significant relationships identified by the correlation coefficients were determined to have adequate effects for consideration and inclusion in the analysis. Non-significant relationships lacked effect sizes large enough to indicate a meaningful effect.
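Cohen's f-squared for one predictor can be expressed in terms of the criterion's R-squared with and without that predictor; the short sketch below uses illustrative R-squared values (not results from the study) to show the calculation and how it maps onto the small/medium/large benchmarks.

```python
# Cohen's f-squared for one predictor, computed from the criterion's R-squared
# with and without that predictor. The R-squared values below are illustrative;
# 0.02, 0.15, and 0.35 mark small, medium, and large effects (Cohen, 1988).
def cohens_f_squared(r2_with: float, r2_without: float) -> float:
    return (r2_with - r2_without) / (1 - r2_with)

print(f"f^2 = {cohens_f_squared(0.62, 0.50):.2f}")   # ~0.32, a medium-to-large effect
```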
Discussion
Research hypotheses represent if/then logic statements (Creswell, 2008). This study used demographic or exogenous variables, independent or endogenous variables, and dependent variables. The TAM was introduced early in the discussion, as it provides a “rationale for the connections among the variables” (Creswell, 2008). There should be a positive relationship between PRIORE and the BIs to use m-learning, and the antecedent variables should also have an effect on the students’ BIs to use m-learning. This study uses the hypotheses set forth by TAM (Davis, 1989) and those of the antecedent variables that are used to extend the research model.
- Hypothesis 1: The PU of m-learning will have a positive effect on the users’ BIs to adopt m-learning as mediated through attitude. PU has a significant positive relationship with BIs, β = .46 at (p < .01). PU is also positively correlated with attitude (ATT), β = .36 at (p < .01). The hypothesis is accepted.
- Hypothesis 2: PEOU of m-learning will have a positive effect on the users’ BIs to adopt m-learning as mediated through attitude. PEOU is mediated by attitude and also PU in the research model and subsequently in the PLS research analysis model. PEOU is “the degree to which a person believes that using a particular system would be free of effort” (Davis, 1989, p. 320). PEOU is significantly and positively related to PU, β = .84 at (p < .01) and also to ATT, β = .54 at (p < .01). With such strong positive correlations to PU and ATT, which is the direct antecedent to BI, the hypothesis is accepted.
- Hypothesis 3: SN will have a positive effect on the users’ BIs to adopt m-learning as mediated through PU and PEOU, and then through attitude. SNs have a positive relationship with PEOU, β = .22 at (p < .01); their relationship to PU is not supported, β = .10 at (p = .43). Therefore, the hypothesis is accepted, as there is a significant relationship with PEOU. Normative behavior is represented by SNs and is expressed as the individual’s perceived view of referent others: the individual may approve of m-learning use if others view it as a positive activity, but may also refuse or reject the innovation based on the opinions of others. This is also contingent on the relationship between normative behavior and attitude. External factors would include reference groups, demographics, and the individual’s personality.
- Hypothesis 4: SE will have a positive effect on the users’ BIs to adopt m-learning through PU and PEOU as mediated by attitude. SE has a significant positive relationship with PEOU, β = .41 at (p < .01). The second path, from SE to PU, is not supported, β = .00 at (p = .47). The hypothesis is accepted, as SE is significantly correlated with BI through the PEOU→ATT→BI pathway. SE is the person’s judgment of his or her capability to perform the task. SE is strongly influenced by a person’s motivation, perseverance, and effort to perform a task (Wood & Bandura, 1989). Therefore, if this is true, it would stand to reason that the prior use of a related technology would be related to SE. A direct connection was made in Warp PLS 3.0, and the model was re-run for the purpose of answering this question related to the hypothesis (see Figure 2).
As can be seen, there is a strong relationship between PRIORE and SE. Therefore, it is possible to further explain SE and PRIORE. SE was not significantly correlated with PU.
- Hypothesis 5: PRIORE will have a positive effect on the users’ BIs to adopt m-learning directly, and as mediated through PEOU and PU to ATT. PRIORE has a positive and significant relationship with BI, β = .17 at (p < .01). PRIORE also has a significant positive relationship with PEOU, β = .29 at (p < .01). The third pathway, which begins with PU, was not supported, β = −.02 at (p = .40). Two of the three pathways, including the direct first-order path to BI, had a significant and positive correlation; therefore, the hypothesis is supported. Prior use of a technology or related technology has been correlated with intention to use and actual use in numerous information systems studies.
Findings
The study found that there is a relationship between PRIORE and BI to use m-learning. Similar questions have been asked of other types of learning, and of previous uses of m-learning as a predictor of m-learning (Akour, 2010). Similar research questions have been examined in the literature regarding e-learning, but not as it relates to the BI to use m-learning. There is a pattern in the e-learning research that follows a similar path. Haverila (2011) found that prior e-learning experience influenced or affected perceived learning outcomes in an undergraduate environment. Therefore, this study and others helped establish new questions to be answered in future research. This study represents an addition to the body of knowledge for the BI to use m-learning.
Implications
Prior experience with e-learning had a significant and positive effect on PU and BI to use m-learning. This adds to the work of Akour (2010), who found that previous use of e-learning had a significant and positive effect on users’ BI to use m-learning. Akour’s final research model uses attitude as an antecedent to and moderator of BI. Researchers and practitioners should be aware of this strong connection, as it may be critical to designing m-learning programs at community colleges or universities. Researchers should note that experience plays a key role in m-learning use intention models and may explain variance in their models.
Future Research
Additional time and financial resources would have allowed a more comprehensive study, one that would have benefited from a mixed-method, longitudinal approach and that may have included actual usage. Actual usage could be measured by launching the programs used for m-learning tasks from a special group of programs within a menu. By categorizing the programs, it would be possible to track them more easily and gather meaningful usage statistics. In addition, it would be interesting to gather more demographic information from a more homogeneous group to build a more in-depth picture of the users and of additional factors that may affect the BI to use m-learning.
As has been discussed, there has been massive and exponential growth in the use of mobile wireless computing platforms. This study has documented this growth and some of the uses seen in the contemporary university and community college. Many studies have examined, and are examining, the potential uses for these technologies and how and where they are most effective. M-learning has been driven by disruptive innovation generated by advances in mobile computing and wireless communication technology. How m-learning is used in the university and what factors influence intentions to use it were among the key questions examined.
Conclusion
The working definition of m-learning was simplified and refined to the use of a smartphone or other Internet-connected mobile computing device for ad hoc tasks that aid the student, including logging on to a learning management system that may or may not have been optimized for mobile users. The definition of m-learning was left open, as the focus of the study was on intentions toward m-learning and how past e-learning experiences may or may not contribute to the user’s intention to use m-learning. A significant correlation was found between previous e-learning and the intention to use m-learning. In addition to addressing the research questions of the study through analysis of the hypotheses, it was learned that PU and PEOU played a large role in determining the BI to use m-learning among students. Whether or not the research model used attitude, the results normally explain a large part of the variance. It is also possible, as seen in the literature and demonstrated in this study, to decompose constructs by adding latent variables that aid in explaining the variance in the research model. Previous learning experiences may be highly relevant for inclusion in user intention studies, as this study demonstrated. As experiences change, new experiences and their potential effects on BI should be examined, as they may aid in explaining intentions to use.
Article Notes
- Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- Funding The author(s) received no financial support for the research and/or authorship of this article.
- © The Author(s) 2015
This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
Author Biographies
Jonathan Abramson holds a master’s degree in organizational management and a doctoral degree in computer science. He has worked in a variety of technology and management positions in the public and private sector. In addition, he started and ran a systems integration and database analysis and programming business for 8 years. He is currently the academic program manager at Post University in Computer Information Systems, in Waterbury, Connecticut.
Maurice Dawson serves as an assistant professor of information systems at the University of Missouri–St. Louis, is a former assistant professor (honorary) of industrial and systems engineering at the University of Tennessee Space Institute, and is a Fulbright Scholar Specialist. Dawson is recognized as an information assurance system architect and engineer by the U.S. Department of Defense. His research focus areas are cyber security, systems security engineering, open source software, mobile security, and engineering management.
Jeffery Stevens holds two master’s degrees, one in human resources (HR) and the other in general management. He has more than 16 years of experience in the areas of HR, management, and process engineering within the private sector. Currently, he is the president and CEO of an international consulting company that aids small and mid-size companies in growth and process refinement. Within the academic realm, he has conducted research and published several articles in the areas of HR, statistics, education, research methodology, and homeland security, as well as groundbreaking research in the area of virtual education. He has taught a wide array of courses in both campus and online settings and is now working with the American Council on Education and the Higher Learning Commission in accrediting academic programs.
References
- Adams, D., Nelson, R., & Todd, P. (1992).
- Ajzen, I. (1991).
- Akour, H. (2010).
- Alexander, B. (2006).
- Al-Gahtani, S. (2001).
- Bagozzi, R. P. (2007).
- Brownlow, M. (2012).
- Chin, W. H. (1998).
- Chin, W. W., & Gopal, A. (1995).
- Churchill, G. A., & Brown, T. J. (2006).
- Cocosila, M., & Archer, N. (2010).
- Cohen, J. (1988).
- Creswell, J. W. (2008).
- Curtis, S. (2014).
- Davis, F. (1989).
- Dillon, W. R., & Goldstein, M. (1984).
- Djamasbi, S., Siegel, M., Tullis, T., & Dai, R. (2010).
- Entner, R. (2010).
- Falk, R., & Miller, N. (1992).
- Fishbein, M., & Ajzen, I. (1975).
- Fornell, C., & Larcker, D. F. (1981).
- Geffen, D., & Straub, D. W. (2005).
- Hair, J., Anderson, J., Tatham, R., & Black, W. (1998).
- Haverila, M. (2011).
- Heerink, M., Krose, B., Evers, V., & Wielinga, B. (2009).
- Holden, R. J., & Karsh, B. T. (2010).
- Hoppe, H., Joiner, R., Milrad, M., & Sharples, M. (2003).
- King, W. R., & He, J. (2006).
- Kock, N. (2011).
- Kock, N. (2012).
- Liu, Y., Han, S., & Li, H. (2010).
- Lu, X., & Viehland, D. (2008).
- Massey, A., Ramesh, V., & Khatri, V. (2006).
- Rogers, E. M. (2003).
- Seong, D. S. K. (2006).
- Sharples, M. (2007).
- Shengquan, Y., Xianmin, Y., Gang, C., & Minjuan, W. (2015).
- Tweed, S. R. (2013).
- Venkatesh, V., & Davis, F. D. (1996).
- Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003).
- Walker, J., & Jorn, L. (2009).
- Weintraub, S. (2010).
- Wood, R., & Bandura, A. (1989).
- Yordanova, K. (2007).
- Zhang, P., Aikman, S., & Sun, H. (2008).