Generative AI & Legal Research

A guide for students and faculty on using generative AI as a tool for legal research and writing

What is Artificial Intelligence?

Artificial intelligence draws on many disciplines, including computer science, mathematics, philosophy, biology, cognitive science, psychology, and neuroscience. Technically defined, artificial intelligence is technology that enables computers or machines to match or exceed human capabilities. It is the umbrella term for technologies meant to accomplish tasks that would otherwise require human intelligence or intervention, including machine learning, deep learning, large language models, natural language processing, and neural networks. These AI sub-fields share common traits: each uses algorithms to build intelligent systems that make predictions, decisions, or classifications based on input data. Currently, deep learning is the most advanced and mature of these approaches and the one that most closely mimics human intelligence.

Taken from LSU "What is AI?"

Machine and Deep Learning

Machine learning (ML) is the science of training a computer program or system to perform tasks without explicit instructions. Computer systems use ML algorithms to process large quantities of data, identify data patterns, and predict accurate outcomes for unknown or new scenarios.

Deep learning is a subset of ML that uses specific algorithmic structures called neural networks modeled after the human brain. These methods attempt to automate more complex tasks that typically require human intelligence.

Like humans and animals, artificial intelligence can learn by trial and error. By optimizing an objective function that penalizes mistakes, the machine can learn how to perform a task correctly and then use that knowledge (as experience) the next time it encounters the same problem. Machine learning algorithms break down data into small and abstract parts to understand information they have not seen before.
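To make "optimizing an objective function that penalizes mistakes" concrete, here is a minimal sketch in Python. The data, the one-parameter model, and the learning rate are illustrative assumptions, not part of any particular system:

```python
# Learning by trial and error: gradient descent on a one-parameter model
# y = w * x, where the objective function (mean squared error) penalizes
# each mistaken prediction.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs; the true rule is y = 2x

w = 0.0    # initial guess for the parameter
lr = 0.05  # learning rate: how big a correction each "trial" makes

for step in range(200):
    # Gradient of the mean squared error tells us which way to adjust w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # move w in the direction that reduces the penalty

print(round(w, 3))  # converges near 2.0, the rule hidden in the data
```

Each pass through the loop is one round of trial and error: the model makes predictions, the objective function measures how wrong they are, and the parameter is nudged so the same mistake is smaller next time.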

Learning can also be supervised or unsupervised. In supervised learning, an algorithm is presented with labeled data that tells the program which class each data item belongs to. Presented with pictures of cats and dogs, for example, the machine would at least initially need help distinguishing between the two. Humans can help by repeatedly labeling the images: "This is a cat," "This is not a cat; this is a dog," and so on. The machine learns both the features generic to the whole dataset (such as four legs, two ears, and a tail) and those specific to individual classes (cat-only features; dog-only features). It would learn that two ears and a tail are not enough to tell the classes apart; it would also look for class-specific features, such as whiskers. Supervised learning has succeeded largely because of the availability of large volumes of labeled training data and powerful computing. In the real world, however, acquiring large volumes of labeled training data can be expensive and impractical, so unsupervised learning, which allows learning from unlabeled data, is attracting more and more attention.
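A toy sketch can show how labeled examples drive supervised learning. The features below (whiskers, barking) are illustrative stand-ins for what a real system would extract from images, and the nearest-neighbor rule is just one simple classifier among many:

```python
# Supervised learning in miniature: labeled examples tell the program
# which class each item belongs to, and a new item is classified by
# finding the most similar labeled example.

# Each example: (has_whiskers, barks) -> label supplied by a human
training_data = [
    ((1, 0), "cat"),
    ((1, 0), "cat"),
    ((0, 1), "dog"),
    ((0, 1), "dog"),
]

def classify(features):
    # Nearest-neighbor rule: borrow the label of the closest training example.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda item: distance(item[0], features))
    return nearest[1]

print(classify((1, 0)))  # "cat"
print(classify((0, 1)))  # "dog"
```

The labels are doing the teaching here: without them, the program would have no way to know which cluster of features should be called "cat."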

In deep learning, layers of artificial neurons (together called a deep neural network) are stacked so that each layer learns representations, loosely resembling those in a human brain, that serve as input to the next layer. Presented with a picture of a horse, the initial layer would learn representations of lines and edges; the next layer would use these to learn representations of simple shapes, and so on. To arrive at "animal" or even "horse" independently, the machine must reason based on context and go beyond drawing simple inferences.
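The idea of stacked layers, each building on the previous one's output, can be sketched in a few lines. Real networks learn their weights from data; the tiny hand-picked weights below are purely illustrative:

```python
# A deep network in miniature: each layer transforms the previous layer's
# output and feeds the result forward.

def layer(inputs, weights, bias):
    # One artificial neuron per output: weighted sum of the inputs, then a
    # simple nonlinearity (ReLU) so stacked layers can build on each other.
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, bias)]

pixels = [0.2, 0.9, 0.4]  # stand-in for raw image input

# First layer: crude "edge detectors" over neighboring pixels
edges = layer(pixels, [[1.0, -1.0, 0.0],
                       [0.0, 1.0, -1.0]], [0.0, 0.0])

# Second layer: combines the edge responses into a "shape" score
shapes = layer(edges, [[0.5, 0.5]], [0.1])

print(shapes)
```

Each layer's output becomes the next layer's input, which is how early representations (edges) get composed into more sophisticated ones (shapes, and eventually "horse").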

Many current implementations of deep learning fall within computer vision, which makes it possible for a machine to "see" the world. This capability underlies both facial recognition and self-driving cars. Analyzing spatial relationships, such as angle, distance, and depth, is key to the machine drawing the right conclusion, and the same is true of light and shadow. When a self-driving car encounters a stop sign with a sticker on it, standing in half-shadow from a nearby building, will it still "see" the stop sign? Perception remains one of the biggest challenges in deep learning.

Taken from LSU "What is AI?"


Large Language Models

Large language models (LLMs) are massive deep-learning models pre-trained on vast amounts of data. They are designed to understand and generate human-like text, and other forms of content, based on the data used to train them. They can infer from context, generate coherent and contextually relevant responses, translate between languages, summarize text, answer questions (in general conversation and FAQs), and even assist in creative writing or code-generation tasks.

They can do this thanks to the billions of parameters that enable them to capture intricate patterns in language and perform a wide array of language-related tasks. LLMs are revolutionizing applications in many fields, from chatbots and virtual assistants to content generation, research assistance, and language translation.

Taken from IBM "What are large language models?"
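As a loose intuition for how a model "captures patterns in language," the toy below counts which word follows which in a tiny text and predicts the most common continuation. Real LLMs do something vastly more sophisticated over long contexts with billions of parameters; this bigram counter is only an illustrative sketch:

```python
# A toy "language model": learn next-word patterns by counting word pairs.

from collections import Counter, defaultdict

corpus = "the court held that the court may review the case".split()

# Count which word follows which in the training text
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Predict the continuation seen most often after this word
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "court" — it followed "the" most often
```

Where this toy looks back one word, an LLM conditions on thousands of preceding tokens, which is what lets it stay coherent and contextually relevant across whole documents.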


Further resources for understanding AI

Chastek Library, Gonzaga University School of Law | 721 N. Cincinnati St. Spokane, WA 99220-3528 | 509.313.3758