Articles 1 - 15 of 15

Full-Text Articles in Cataloging and Metadata

Chatgpt As Metamorphosis Designer For The Future Of Artificial Intelligence (Ai): A Conceptual Investigation, Amarjit Kumar Singh (Library Assistant), Dr. Pankaj Mathur (Deputy Librarian) Mar 2023

Library Philosophy and Practice (e-journal)

Purpose: The purpose of this research paper is to explore ChatGPT’s potential as an innovative designer tool for the future development of artificial intelligence. Specifically, this conceptual investigation aims to analyze ChatGPT’s capabilities as a tool for designing and developing near-human intelligent systems for future use and development in the field of Artificial Intelligence (AI). The paper also analyzes the strengths and weaknesses of ChatGPT as a tool and identifies possible areas for improvement in its development and implementation. This investigation focused on the various features and functions of ChatGPT that …


Creating Data From Unstructured Text With Context Rule Assisted Machine Learning (Craml), Stephen Meisenbacher, Peter Norlander Dec 2022

School of Business: Faculty Publications and Other Works

Popular approaches to building data from unstructured text come with limitations in scalability, interpretability, replicability, and real-world applicability. These can be overcome with Context Rule Assisted Machine Learning (CRAML), a method and no-code suite of software tools that builds structured, labeled datasets which are accurate and reproducible. CRAML enables domain experts to access uncommon constructs within a document corpus in a low-resource, transparent, and flexible manner. CRAML produces document-level datasets for quantitative research and makes qualitative classification schemes scalable over large volumes of text. We demonstrate that the method is useful for bibliographic analysis, transparent analysis of proprietary data, …
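
As a rough, hypothetical sketch of the general idea behind context-rule-assisted labeling (this is not the authors' CRAML implementation or rule syntax; the rules, labels, and helper below are invented for illustration), a keyword only produces a label when the required context terms also appear in the same text chunk:

    # Hypothetical context-rule labeling; rules and labels are illustrative only.
    import re

    CONTEXT_RULES = [
        {"label": "noncompete_clause", "keyword": r"\bnon-?compete\b",
         "context": [r"\bagreement\b", r"\bemployment\b"]},
        {"label": "remote_work", "keyword": r"\bremote\b",
         "context": [r"\bwork\b", r"\btelecommut\w*\b"]},
    ]

    def label_chunk(chunk):
        """Return every label whose keyword and at least one context term match."""
        labels = []
        for rule in CONTEXT_RULES:
            if re.search(rule["keyword"], chunk, re.IGNORECASE) and any(
                re.search(ctx, chunk, re.IGNORECASE) for ctx in rule["context"]
            ):
                labels.append(rule["label"])
        return labels

    print(label_chunk("The employment agreement includes a non-compete provision."))
    # -> ['noncompete_clause']

Because the rules are explicit text rather than opaque model weights, re-running the same rule set over the same corpus reproduces the same labeled dataset, which is the kind of transparency and replicability the abstract emphasizes.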


Streaminghub: Interactive Stream Analysis Workflows, Yasith Jayawardana, Vikas G. Ashok, Sampath Jayarathna Jan 2022

Computer Science Faculty Publications

Reusable data/code and reproducible analyses are foundational to quality research. This aspect, however, is often overlooked when designing interactive stream analysis workflows for time-series data (e.g., eye-tracking data). A mechanism to transmit informative metadata alongside data may allow such workflows to intelligently consume data, propagate metadata to downstream tasks, and thereby auto-generate reusable, reproducible analytic outputs with zero supervision. Moreover, a visual programming interface to design, develop, and execute such workflows may allow rapid prototyping for interdisciplinary research. Capitalizing on these ideas, we propose StreamingHub, a framework to build metadata propagating, interactive stream analysis workflows using visual programming. We conduct …
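
As a minimal sketch of the metadata-propagation idea using plain Python generators (a generic illustration, not StreamingHub's actual framework, API, or data format):

    # Each record carries metadata alongside the data sample; downstream tasks
    # forward the metadata and append a note describing what they did.
    from typing import Any, Dict, Iterator, Tuple

    Record = Tuple[Dict[str, Any], Dict[str, Any]]  # (metadata, sample)

    def eye_tracker_stream() -> Iterator[Record]:
        meta = {"device": "hypothetical-tracker", "sampling_rate_hz": 60, "units": "px"}
        for t in range(3):
            yield meta, {"t": t / 60.0, "x": 100 + t, "y": 200 - t}

    def smooth(stream: Iterator[Record]) -> Iterator[Record]:
        for meta, sample in stream:
            new_meta = dict(meta)
            new_meta["pipeline"] = meta.get("pipeline", []) + ["smooth"]
            yield new_meta, sample  # actual smoothing omitted for brevity

    for meta, sample in smooth(eye_tracker_stream()):
        print(meta["pipeline"], sample)

Since every output record still carries the device and processing metadata, a later task or archive can describe how a result was produced without manual bookkeeping, which is what makes reusable, reproducible outputs with zero supervision plausible.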


Automatic Metadata Extraction Incorporating Visual Features From Scanned Electronic Theses And Dissertations, Muntabir Hasan Choudhury, Himarsha R. Jayanetti, Jian Wu, William A. Ingram, Edward A. Fox Jan 2021

Computer Science Faculty Publications

Electronic Theses and Dissertations (ETDs) contain domain knowledge that can be used for many digital library tasks, such as analyzing citation networks and predicting research trends. Automatic metadata extraction is important to build scalable digital library search engines. Most existing methods are designed for born-digital documents, so they often fail to extract metadata from scanned documents such as ETDs. Traditional sequence tagging methods mainly rely on text-based features. In this paper, we propose a conditional random field (CRF) model that combines text-based and visual features. To verify the robustness of our model, we extended an existing corpus and created a …
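
For readers unfamiliar with CRF sequence tagging over mixed features, the sketch below uses the third-party sklearn-crfsuite package; the feature names, layout values, and labels are hypothetical and are not the paper's actual feature set or corpus:

    # Tag each line of a scanned ETD cover page with a metadata field, combining
    # text-based and visual (layout) features. Requires: pip install sklearn-crfsuite
    import sklearn_crfsuite

    def line_features(line):
        return {
            # text-based features
            "lower": line["text"].lower(),
            "is_upper": line["text"].isupper(),
            "has_digit": any(c.isdigit() for c in line["text"]),
            # visual features recovered from the page image (hypothetical)
            "font_size": line["font_size"],
            "y_position": line["y"],
            "is_bold": line["bold"],
        }

    # One cover page = one sequence of lines; labels are the metadata fields.
    page = [
        {"text": "A STUDY OF METADATA EXTRACTION", "font_size": 18, "y": 0.15, "bold": True},
        {"text": "by Jane Doe", "font_size": 12, "y": 0.35, "bold": False},
        {"text": "May 2021", "font_size": 12, "y": 0.80, "bold": False},
    ]
    X, y = [[line_features(l) for l in page]], [["title", "author", "date"]]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))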


Opening Books And The National Corpus Of Graduate Research, William A. Ingram, Edward A. Fox, Jian Wu Jan 2020

Computer Science Faculty Publications

Virginia Tech University Libraries, in collaboration with the Virginia Tech Department of Computer Science and the Old Dominion University Department of Computer Science, request $505,214 in grant funding for a 3-year project whose goal is to bring computational access to book-length documents, demonstrating this capability with Electronic Theses and Dissertations (ETDs). The project is motivated by the following library and community needs. (1) Despite huge volumes of book-length documents in digital libraries, there is a lack of models offering effective and efficient computational access to these long documents. (2) Nationwide open access services for ETDs generally function at the metadata level. …


A Survey Of Archival Replay Banners, Sawood Alam, Mat Kelly, Michele C. Weigle, Michael L. Nelson Jan 2018

Computer Science Faculty Publications

We surveyed various archival systems to compare and contrast different techniques used to implement an archival replay banner. We found that inline plain HTML injection is the most common approach, but prone to style conflicts. Iframe-based banners are also very common and while they do not have style conflicts, they suffer from screen real estate wastage and limited design choices. Custom Elements-based banners are promising, but due to being a new web standard, these are not yet widely deployed.
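
As a hedged illustration of the most common technique the survey identifies, inline HTML injection amounts to the replay system inserting a banner fragment into the memento before serving it. The snippet below is a generic sketch, not how any particular archive implements it:

    # Insert an archival banner right after the opening <body> tag of an
    # archived page; the markup and styling here are invented for illustration.
    import re

    BANNER = ('<div id="archive-banner" '
              'style="background:#ffc;padding:4px;border-bottom:1px solid #999;">'
              'You are viewing an archived copy captured on {dt}.</div>')

    def inject_banner(archived_html, capture_dt):
        banner = BANNER.format(dt=capture_dt)
        # Case-insensitive match; only the first <body ...> tag is altered.
        return re.sub(r"(<body[^>]*>)", r"\1" + banner, archived_html,
                      count=1, flags=re.IGNORECASE)

    page = "<html><body><h1>Original page</h1></body></html>"
    print(inject_banner(page, "2018-01-15 12:00 UTC"))

Because the injected markup shares the page's DOM and stylesheet cascade, the original page's CSS can restyle or hide it, which is exactly the style-conflict drawback noted above; iframe and Custom Elements banners trade that problem for the other costs the survey describes.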


Infographics: A Practical Guide For Librarians, Darren Sweeper Feb 2017

Sprague Library Scholarship and Creative Works

No abstract provided.


Databrarianship: The Academic Data Librarian In Theory And Practice, Darren Sweeper Dec 2016

Sprague Library Scholarship and Creative Works

No abstract provided.


Data Visualizations And Infographics, Darren Sweeper Sep 2016

Sprague Library Scholarship and Creative Works

No abstract provided.


Comparing Institutional Repository Software: Pampering Metadata Uploaders, Craighton Hippenhammer Apr 2016

Faculty Scholarship – Library Science

This article highlights the key concepts of institutional repositories and identifies the strengths of the Digital Commons and Wesleyan Holiness Digital Library products. Special attention is given to software structures and features, support systems, and factors that impact quality. Parts of this article were given as an Association of Christian Librarians annual national conference workshop presentation at Carson-Newman University, Jefferson City, Tennessee, June 11, 2015.


Linked Data Demystified: Practical Efforts To Transform Contentdm Metadata For The Linked Data Cloud, Silvia B. Southwick, Cory K. Lampert Nov 2012

Library Faculty Presentations

The library literature and events like the ALA Annual Conference have been inundated with presentations and articles on linked data. At UNLV Libraries, we understand the importance of linked data in helping us better serve our users. We have designed and initiated a pilot project to apply linked data concepts to the practical task of transforming a sample set of our CONTENTdm digital collections data into future-oriented linked data. This presentation will outline the rationale for beginning work in linked data and detail the phases we will undertake in the proof-of-concept project. We hope through this research experiment to …
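
As a small, hypothetical sketch of what transforming flat digital-collection metadata into linked data can look like (the field names, predicates, and base URI below are invented and are not UNLV's actual mapping), one record can be re-expressed as RDF triples with the rdflib library:

    # Requires: pip install rdflib -- map one flat record to Dublin Core triples.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS

    record = {
        "identifier": "pho001234",
        "title": "Fremont Street at night",
        "creator": "Unknown photographer",
        "date": "1956",
    }

    EX = Namespace("http://example.org/items/")  # hypothetical base URI
    g = Graph()
    g.bind("dcterms", DCTERMS)

    item = EX[record["identifier"]]
    g.add((item, DCTERMS.title, Literal(record["title"])))
    g.add((item, DCTERMS.creator, Literal(record["creator"])))
    g.add((item, DCTERMS.date, Literal(record["date"])))

    print(g.serialize(format="turtle"))

Once each item has a stable URI and its fields are expressed as standard predicates, the records can be linked to external vocabularies and queried alongside other datasets in the linked data cloud.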


Evaluating And Implementing Web Scale Discovery Services: Part Two, Jason Vaughan, Tamera Hanken Jul 2011

Library Faculty Presentations

Part Four: Quick Tour of the Current Marketplace:

  • "The Big 5"
  • Similarities and differences

Part Five: It's Not All Sliced Bread:

  • Shortcomings of web scale discovery

Part Six: Implementation (pre launch steps):

  • Selecting and preparing implementation staff
  • Preparing and communicating process/decisions with all staff
  • Working with the vendor (roles, expectations, timeline)
  • Workflow changes and implications (technical services)

Part Seven: Specific implementation tasks, issues, and considerations:

  • Record loading and mapping (catalog content)
  • Harvesting and mapping digital/local content
  • Working with central index data (internal & external content)
  • Web integration and customization
  • Assessment and continuous improvement


Evaluating And Implementing Web Scale Discovery Services: Part One, Jason Vaughan, Tamera Hanken Jul 2011

Library Faculty Presentations

Preface: Before Web Scale Discovery

  • A very brief overview

Part 1: What is Web Scale Discovery?

  • Content
  • Technology

Part 2: Why is Web Scale Discovery important?

  • What’s the need?
  • How is it different from earlier attempts at broad discovery?

Part 3: A Framework for Evaluating Web Scale Discovery Services

  • What we did at UNLV
  • Other options


Skos And The Semantic Web: Knowledge Organization, Metadata, And Interoperability, Eric A. Robinson Jan 2010

Other Topics

The Simple Knowledge Organization System (SKOS) is a Semantic Web framework, based on the Resource Description Framework (RDF), for thesauri, classification schemes, and simple ontologies. It allows for machine-actionable description of the structure of these knowledge organization systems (KOS) and provides an excellent tool for addressing the interoperability and vocabulary control problems inherent to the rapidly expanding information environment of the Web. This paper discusses the foundations of the SKOS framework and reviews the literature on a variety of SKOS implementations. The limitations of SKOS that have been revealed through its broad application are addressed with brief attention to the proposed …
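
To make the framework concrete, here is a minimal, hypothetical SKOS concept scheme expressed with the rdflib library (the vocabulary, labels, and URIs are invented for illustration):

    # Requires: pip install rdflib -- a tiny SKOS scheme with a broader/narrower link.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/vocab/")  # hypothetical vocabulary base
    g = Graph()
    g.bind("skos", SKOS)

    scheme = EX.animals
    g.add((scheme, RDF.type, SKOS.ConceptScheme))

    mammals, cats = EX.mammals, EX.cats
    for concept, label in [(mammals, "Mammals"), (cats, "Cats")]:
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
        g.add((concept, SKOS.inScheme, scheme))

    g.add((cats, SKOS.broader, mammals))   # machine-actionable hierarchy
    g.add((mammals, SKOS.narrower, cats))

    print(g.serialize(format="turtle"))

Because the hierarchy and labels are plain RDF triples, any SKOS-aware tool can consume the scheme, which is where its value for interoperability and vocabulary control comes from.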


Cataloging Expert Systems: Optimism And Frustrated Reality, William Olmstadt Feb 2000

E-JASL 1999-2009 (Volumes 1-10)

There is little question that computers have profoundly changed how information professionals work. The process of cataloging and classifying library materials was one of the first activities transformed by information technology. The introduction of the MARC format in the 1960s and the creation of national bibliographic utilities in the 1970s had a lasting impact on cataloging. In the 1980s, the affordability of microcomputers made the computer accessible for cataloging, even to small libraries. This trend toward automating library processes with computers parallels a broader societal interest in the use of computers to organize and store information. Following World War II, …