
ERR – Expected Reciprocal Rank

ERR = \sum_{r=1}^{c} \frac{rel(r)}{disc(r)} \prod_{i=1}^{r-1} \left(1 - rel(i)\right)

ERR formula, with rel(i) read as the probability that the result at rank i is relevant, disc(r) a rank-based discount and c the cut-off rank. For each rank r, the inverted probabilities of a relevant result at each earlier rank i are multiplied; this product is used as a damping factor for the gain at the current rank.

ERR is a relatively new and user-centric metric. The importance of each later result is assumed to be negatively correlated with the usefulness of previous results; that is, the better the earlier results, the less important the later ones.

Theory: Section 4.3, page 33.

Study: Section 10.4, page 104.

In the evaluation, ERR showed relatively stable discriminative performance, albeit at a below-average level. It performed best with a discount by square rank, though the differences between the discount functions were small.
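To make the computation concrete, here is a minimal Python sketch of ERR as described above. The mapping from graded relevance labels to relevance probabilities and the default parameters are illustrative assumptions rather than the exact choices of this study; the discount function is passed in so that, for example, the square-rank discount mentioned above can be tried.

```python
# Minimal ERR sketch; the grade-to-probability mapping is an assumed convention.

def err(grades, discount=lambda r: r, max_grade=5):
    """grades: graded relevance labels (0..max_grade), best-ranked result first."""
    score = 0.0
    p_reach = 1.0  # probability that the user reaches the current rank
    for rank, grade in enumerate(grades, start=1):
        p_rel = (2 ** grade - 1) / (2 ** max_grade)  # assumed mapping to P(relevant)
        score += p_reach * p_rel / discount(rank)    # damped gain at this rank
        p_reach *= 1.0 - p_rel                       # damp all later contributions
    return score

# Square-rank discount, which the evaluation found to work best for ERR:
print(err([3, 5, 0, 1], discount=lambda r: r ** 2))
```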

ESL – Expected Search Length

ESL = \min\left\{ r \le c : \sum_{i=1}^{r} \frac{rel(i)}{disc(i)} \ge n \right\}

Expected Search Length adapted for graded relevance and discount. r is the rank at which the sum of single-result relevance scores reaches a threshold n; the scores are obtained by dividing the user relevance score rel(i) for rank i by a discount disc(i) dependent on the same rank. c is the cut-off value used.

ESL scores depend on the number of results a user has to examine before satisfying an information need; classically, this is modelled as seeing a set minimal number of relevant results. Because much of the present study uses non-binary relevance, the information need is instead regarded as satisfied once results with a set cumulative relevance have been seen.

Theory: Section 4.3, page 32.

Study: Section 10.4, page 108.

In the evaluation, ESL performed quite well. The best parameters for a six-point relevance evaluation were a cumulative relevance threshold n of around 2.5 with a high-discount function (e.g. a rank-based discount).
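The following Python sketch mirrors that description under stated assumptions: relevance scores are already on the study's graded scale, the threshold and cut-off defaults are only examples, and returning the cut-off rank when the threshold is never reached is an assumption rather than the study's documented choice.

```python
# ESL sketch: rank at which discounted cumulative relevance reaches the threshold n.

def esl(relevances, n=2.5, discount=lambda r: r, cutoff=10):
    """relevances: graded scores of the ranked results, best-ranked first."""
    cumulative = 0.0
    for rank, rel in enumerate(relevances[:cutoff], start=1):
        cumulative += rel / discount(rank)  # discounted contribution of this result
        if cumulative >= n:
            return rank  # information need regarded as satisfied at this rank
    return cutoff  # assumption: treat an unsatisfied need as exhausting the list

print(esl([1.0, 0.8, 0.6, 1.0], n=1.5))  # threshold reached at rank 3
```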

MAP – Mean Average Precision

MAP = \frac{1}{|Q|} \sum_{q \in Q} \frac{1}{|R_q|} \sum_{r=1}^{|D_q|} \frac{rel(r) \sum_{i=1}^{r} rel(i)}{disc(r)}

Modified MAP formula with queries Q, relevant documents R_q and ranked documents D_q (at rank r). rel is a relevance function assigning 1 to relevant and 0 to non-relevant results or, in this study, one of six values in the range from 1 to 0. disc(r) is a discount function depending on the rank r.

As its name suggests, MAP calculates average precision scores (at each relevant result) and then takes the mean of those over multiple queries. Usually, later addends are discounted by rank (disc(r) = r).

Theory: Section 4.1, page 25.

Study: Section 10.3.

An important (and relatively stable) result is that the routinely used discount by rank produces results far below those of other metrics such as (N)DCG, or even of MAP without any discount at all, which performs quite well.
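As an illustration, here is a small Python sketch of discounted average precision in this spirit, assuming the graded generalization described above (relevance values between 0 and 1) and a pluggable discount; disc(r) = r recovers the classical formulation, while disc(r) = 1 corresponds to the no-discount variant that performed well here.

```python
# Discounted (M)AP sketch; the graded generalization is an assumption of this example.

def average_precision(relevances, discount=lambda r: r):
    """relevances: graded scores (0..1) of the ranked results for one query."""
    n_relevant = sum(1 for rel in relevances if rel > 0)
    if n_relevant == 0:
        return 0.0
    cumulative = 0.0  # running sum of relevance up to the current rank
    total = 0.0
    for rank, rel in enumerate(relevances, start=1):
        cumulative += rel
        total += rel * cumulative / discount(rank)  # addend is zero where rel = 0
    return total / n_relevant

def mean_average_precision(per_query_relevances, discount=lambda r: r):
    return sum(average_precision(q, discount) for q in per_query_relevances) / len(per_query_relevances)

# No discount at all, which the study found to perform quite well:
print(mean_average_precision([[1, 0, 0.6, 0], [0, 1, 0, 0.2]], discount=lambda r: 1))
```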

MRR – Mean Reciprocal Rank

MRR = \frac{1}{|Q|} \sum_{q \in Q} RR_q

Mean Reciprocal Rank (MRR), Q being the set of queries and RR_q the Reciprocal Rank measured for query q.

MRR is a straightforward metric which depends solely on the rank of the first relevant result.

Theory: Section 4.2, page 27.

Study: Section 10.4, page 107.

In the evaluation, MRR performed significantly worse than most other metrics.
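A short Python sketch of Reciprocal Rank and MRR follows; with graded judgments, the rule that decides what counts as "relevant" is an assumption of this example.

```python
# MRR sketch: only the rank of the first (sufficiently) relevant result matters.

def reciprocal_rank(relevances, relevant_if=lambda rel: rel > 0):
    for rank, rel in enumerate(relevances, start=1):
        if relevant_if(rel):
            return 1.0 / rank
    return 0.0  # no relevant result in the list

def mean_reciprocal_rank(per_query_relevances):
    return sum(reciprocal_rank(q) for q in per_query_relevances) / len(per_query_relevances)

print(mean_reciprocal_rank([[0, 0, 1], [1, 0, 0]]))  # (1/3 + 1) / 2 = 0.667
```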

NDCG – Normalized Discounted Cumulative Gain

DCG_r = \begin{cases} CG_r & \text{if } r < b \\ DCG_{r-1} + \dfrac{rel(r)}{disc(r)} & \text{if } r \ge b \end{cases}

DCG with logarithm base b. CG_r is the Cumulated Gain at rank r, and rel(r) a relevance function assigning 1 to relevant and 0 to non-relevant results. disc(r) is a discount function depending on the rank r, typically \log_b r.

Normalized DCG is calculated by dividing the DCG at rank r by the DCG of an ideal result ranking (iDCG) at the same rank.

A metric derived from Discounted Cumulative Gain (DCG). It adds up the relevance scores of all documents up to a cut-off value, with each later result discounted by log2 of its rank. It is then normalized by dividing it by the DCG of an idealized result list (iDCG) constructed from the pool of all available results.

Theory: Section 4.2, page 29.

Study: Section 10.1.

In the evaluation, NDCG was found to perform well in most circumstances with the usually employed log2 discount.
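The sketch below follows the base-b definition given above: results at ranks below the logarithm base are not discounted, and the score is normalized by the DCG of an ideally reordered list of judged results. The pool of judged results passed to ndcg() is an assumption of the example, standing in for the study's pooled judgments.

```python
import math

# (N)DCG sketch with logarithm base b; ranks below b receive no discount.

def dcg(relevances, b=2):
    total = 0.0
    for rank, rel in enumerate(relevances, start=1):
        disc = math.log(rank, b) if rank >= b else 1.0  # no discount before rank b
        total += rel / disc
    return total

def ndcg(relevances, judged_pool, b=2):
    """Normalize by the DCG of an ideally ordered list built from the judged pool."""
    ideal = sorted(judged_pool, reverse=True)[:len(relevances)]
    idcg = dcg(ideal, b)
    return dcg(relevances, b) / idcg if idcg > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1], judged_pool=[3, 3, 3, 2, 2, 1, 0, 0]))
```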

Precision

One of the oldest and simplest metrics, Precision is just the share of relevant results in the retrieved list. For reasons described in the study, it is very similar to (N)DCG without a discount for later ranks.

Theory: Section 4.1, page 24.

Study: Section 10.2.

Mostly, Precision performed acceptably; the most startling feature was a strong decline in discriminative power at later cut-off ranks.
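For completeness, a Precision sketch in the same style; the graded variant, which simply averages the relevance scores of the top k results, is an assumption of this example and is what makes the kinship to (N)DCG without a rank discount visible.

```python
# Precision sketch: share of relevant results among the top k retrieved.

def precision_at_k(relevances, k, relevant_if=lambda rel: rel > 0):
    top = relevances[:k]
    return sum(1 for rel in top if relevant_if(rel)) / len(top) if top else 0.0

# Graded variant (assumed here): mean relevance of the top k, i.e. the undiscounted
# cumulated gain at rank k divided by k.
def graded_precision_at_k(relevances, k):
    top = relevances[:k]
    return sum(top) / len(top) if top else 0.0

print(precision_at_k([1, 0, 1, 1, 0], k=5))          # 0.6
print(graded_precision_at_k([1.0, 0.2, 0.8], k=3))   # 0.667
```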
