# 6-23-25
## Meeting Start
- Meeting started at 01:15
## Course Introduction
- Introduction to MATH307: Scientific Computing
- Overview of course content and assessments
- Course materials have been published on Canvas
- Standard syllabus available on course web space
- Web space can be found by searching 'Scott Strong Math'
## Website Access
- Add 'math' to search to avoid confusion with other Scott Strongs
- Direct link to website provided if DNS server issues occur
## Course Content Management
- Final grades calculation method to be discussed
- Changes to deliverables for summer session due to smaller class size
- Link to general course description to be removed from top level
- Focus on organizing content for Introduction to Scientific Computing for Summer 2025
## Instructor Information
- Instructor: Scott Strong, flexible with name usage
- Biographical information to be shared in slides
## Course Description
- Course designed for STEM audience of applied scientists and engineers
- Scientific computing as an offshoot of applied mathematics
## Course Schedule
- Course meeting times: Monday to Thursday, 01:15 to 3PM
- Location: M Z 026
- Two fifty-minute lectures with a five-minute break
## Lecture and Office Hours Details
- Transition between computers and boards during lectures
- Two types of office hours: open and by appointment
- Open office hours location: Stratton Hall 102
- Open office hours on Monday, Tuesday, and Thursday
- No open office hours on Wednesdays
- Exceptions: July 8 and July 29, open hours will be rescheduled
- Additional office hours from 3PM to 4PM, Monday to Thursday, in Stratton 102
- Additional office hours by appointment to be held in the computer lab
- Hyperlink provided for booking individual appointments in 15-minute blocks
## Scheduling Issues
- Previous scheduling issue: course changed to MWF 50-minute sessions, reducing contact time
## Grading System
- Introduction of specifications grading: tasks need to be completed to a specified level
## Use of Technology in Course
- Discussion on the use of Generative AI in academic work
- Use of school's large language model interface for educational purposes
- Interface provides hints but does not require coding from scratch
- Emphasis on working within C language
- Safeguards on generative AI have been adjusted for course use
- Emphasis on visual aids like pictures and graphs in computational mathematics
- Loosened restrictions on educational generative AI interface for quicker task completion
- Generative AI interface includes pre-class objectives and interactive activities
## Classroom Practices
- Pictures of the board are taken for students who are absent
- Emphasis on attending classes as board work is hard to interpret later
- Post-class assignment: interact with board content, due within a couple of days
## Learning Strategies
- Emphasis on multiple interactions with content for better understanding
- Pre-homework activities to encourage pre-thinking about homework topics
## Generative AI Interactions
- Students expected to complete 75% of interactions with generative AI to pass
- Interactions designed to be brief and not time-consuming
## Homework and Grading Details
- Grades are broken down into letter grades for participation
- Six homework assignments planned, each limited to a single front side of a page
## Homework Structure and Grading
- Homework sets consist of five problems each
- Grading: 2 means complete, 1 indicates issues to address
- Generative AI can assist when students hit a wall in the coding process
## Homework Completion Criteria
- Zero given for incomplete homework with no revision opportunity
- Completion of 80% of total problems required for homework
- Minimal software engineering knowledge required for computational mathematics
- Grade variations: 75-80% completion for C-, 80-85% for C+, >85% for B-
## Problem Revision Policy
- Students may revise level one problems to achieve completeness
## Additional Grading Components
- Micro projects introduced for higher grades (B and A)
## Presentation and Homework Impact
- PowerPoint slides will be used to discuss point of view, worth 1-2 points
- Completion of B or C level homework affects grading, treated as 80% completion
## Project Grading Criteria
- Ten two-point projects will result in a 100% grade
- Five two-point projects will result in a 90% grade, treated as an A-
- A grades start at 95%
- 10 to 12 different one to two point projects are being developed to apply scientific computing in practical scenarios
## Use of Generative AI in Projects
- Completing more than 85% of homework grants two additional points, starting at 82% for B-level work
- Generative AI can be used to enhance codes for scientific computing projects
## Interaction and Project Development
- Interaction with generative AI is encouraged for homework and micro projects
- Micro projects aim to build a comprehensive code library
## Instructor's Insights
- Encouragement to have fun with coding and utilize modern tools
- Instructor has been teaching since 2001, witnessing the growth of the Internet
## Adapting to Information Availability
- Educators need to adapt to the availability of information online, similar to the impact of Wikipedia
## Course Introduction
- Course is an introduction to scientific computing
- Instructor: Scott Strong, grew up in North Pole, Alaska
## Instructor's Background
- Instructor was born in Enid, Oklahoma, and moved to North Pole at age one
- Moved to Anchorage, Alaska, at age seven
- Worked at the ARLIS library in Anchorage for two summers
- Worked as a grocery store clerk at Fred Meyer in the Pacific Northwest for two summers
## Teaching Experience
- Started teaching C++ in 2001 using transparency machines
## Academic Background
- Holds a Master's in Science in Applied and Computational Mathematics
- PhD in Applied Physics, focusing on mathematical physics of one-dimensional ambient space
- Expertise in vector and tensor analysis, differential geometry, and nonlinear PDEs
## Personal Anecdotes
- Instructor shared a story about driver's license picture and headshots
- School started bringing in people for headshots before it became common practice
## Personality Insights
- Instructor consistently scores as INTP on Myers Briggs type indicator
- Instructor took an alignment test: 92% true neutral, 69% chaotic good, 69% neutral good
## Teaching Anecdotes
- Instructor no longer does practical jokes on April Fools due to a past incident with a fake electronic quiz
- Shared an anecdote about a prank involving a Futurama splash screen in class
- Last practical joke involved a prank that made a student so anxious they vomited
## Personal Preferences
- Prefers dogs over cats
- Would like to be an owl in the air, a turtle in the sea, obsidian as a rock, tulips as a flower, and orange as a color
- Believes comedy is better than tragedy
## Interactive Teaching Methods
- Occasionally uses Mentimeter system for interactive sessions
## Interactive Teaching Methods
- Instructor plans to start another Mentimeter session
- Instructor felt they had a rough go earlier in the year
- Instructor is using a word cloud visualization in class
## Group Dynamics
- Discussed the presence of a mathematician in the group
- Noted a significant presence of mechanical engineering students in the course
- Instructor found a new icebreaker question for interactive sessions
- Plans to use scaled questions and information polling in future sessions
## Engineering Relevance
- Discussed debates about parking spots and control time
## Course Content
- Course will cover implementing algorithms and procedures
- Focus on encoding and approximating solutions
- Discussed the importance of approximations in computing and selecting appropriate algorithms for specific problems
- Emphasized understanding the margin of error in numerical approximations
- Focus on solving algebraic equations using techniques for systems with many variables
- Representation of discrete data from digitized aspects of the real world will be a key feature
- Students should have completed a calculus sequence and have taken, or be concurrently taking, a course in ODEs
## Sound and Mechanical Engineering
- Instructor plans to discuss sound generation and its relevance to mechanical engineering
- Discussed storing sounds digitally and breaking them into components
- Discussed the conversion of mechanical energy to electrical information in the ear
- Instructor has videos from last year that may still be relevant for current discussions
## Numerical Path Representation
- Discussed numerical representation of paths and curves in space
- Discussed solving equations to determine when algebraic terms representing paths are equal
## Computer Graphics Discussion
- Discussed the visual representation of water in computer graphics
- Mentioned issues with ray tracing affecting visual quality
- Ray tracing significantly enhances the visual representation of water in computer graphics
- New chips released last summer may have contributed to improvements in ray tracing
- Photons in ray tracing follow straight line paths and interact with virtual objects
- Discussed computational problem of tracing rays and computing light intensity through integrals
- Discussed the difference between raw data and expected visual output in computer graphics
## Animation Techniques
- Discussed interpolating frames to create smooth motion in animations
- Generative AI was used last year for interpolating frames in animations
- Stitching frames together improves with closely aligned frames, reflecting the state of generative AI last year
- Generative AI in animations can humanize robotic movements
## Robotics Discussion
- Robots exhibit human-like errors, adding a relatable aspect to their operation
- Boston Dynamics released a new robot model with enhanced rotational capabilities
## Computational Crash Tests
- Discussed the representation of solid objects in computational crash tests
## Future Plans
- Discussed potential projects and problems to tackle this summer
## Programming Languages Discussion
- MATLAB will be used as the standard environment for code development
- Generative AI facilitates quick porting of code from MATLAB to Python or R
- Students have the option to use Python, MATLAB, or R for their projects
## Language Preferences
- Last summer, the course infrastructure was heavily tied to MATLAB, limiting language choice
- 41% of students prefer MATLAB for their projects
- MATLAB is commonly used in industry, but many firms are transitioning to Python due to cost and capabilities
- MATLAB usage in companies can restrict personal projects or side gigs, pushing individuals towards Python for more flexibility
- Preference towards Python due to its open-source nature and flexibility
- Generative AI can be used to port code between MATLAB and Python
- Students encouraged to explore multiple programming languages
- R is acknowledged as more of a statistics program but capable of similar tasks
## AI and Machine Learning Integration
- Introduction of the Hi TA system, a user interface leveraging OpenAI and Anthropic APIs
- The system is connected to course knowledge and designed to be educational in its responses
- The Hi TA system prioritizes education over solution provision
## Critical Thinking and AI Usage
- Emphasized the importance of critical thinking when using advanced AI systems
- Encouraged clarity on personal development versus tool utility in course projects
## Hi TA System Enhancements
- Hi TA system will incorporate worksheets, transcriptions, and summarized lectures into its knowledge base
- Content in the Hi TA system will be anonymized to improve understanding and response tailoring
## Hi TA System Interaction Guidelines
- Discussion on the movie 'Hidden Figures' and its relevance to understanding ordinary differential equations in space missions
- Hi TA link will be available on Canvas for students
- Access to the Hi TA system requires authentication through Mines user accounts
- Hi TA interactions have due dates on Canvas, with objectives due before the course and reflections due two days later at 11:59 PM
- Due dates for Hi TA interactions are considered soft deadlines
## Lecture Objectives
- LO1 has been completed and posted to the website
- Interface allows side-by-side view on the website, embedded into Canvas
- Lectures will focus on identifying how linear transformations affect standard basis vectors
- Interaction with the Hi TA system is designed to be brief, ideally between 10 to 20 minutes
- Conversations with the Hi TA system can be exported as a markdown file from the website
## Hi TA System Submission and Feedback
- Submit markdown file of Hi TA interactions to Canvas as part of the process
- Interactions should be spaced to enhance learning through neuron activation
- Hi TA system provides feedback on task completion status
- Recommended interaction time with Hi TA is 10 to 20 minutes
- Encouragement to work with the Hi TA system and provide feedback for adjustments
## Source Material Access
- Source material for problems is publicly available through the website
## Additional Resources
- Homework assignments are available on scottastrong.org
- Hi TA guides available showing examples of interactions and capabilities
- Lecture objectives and reflections available on boards
## Hi TA System Usage Tips
- Check-in with Hi TA to ensure all points are covered before submission
- Look for the word 'complete' in Hi TA feedback before submitting
- Ongoing efforts to link Canvas and Hi TA system
- No required textbook; information will be provided in lectures
## Mathematica Usage
- Lecture duration is approximately 50 minutes
- Mathematica prefers exact symbolic calculations over double precision
- Expertise in using Mathematica for mathematical tasks
- Option to use Python or MATLAB for majors requiring experience in these languages
## Attendance and Participation
- Attendee will miss Thursday, Monday, and Tuesday due to a funeral
- Plan to take pictures of the boards and upload to Hi TA
- Plan to discuss a better strategy for capturing board information on Wednesday
## Hi TA System Navigation
- Access recent conversations on Hi TA webpage through embedded view
# 6-24-25
## MATH307 Scientific Computing - Day 2 Meeting Notes (June 24, 2025)
## Meeting Overview
**Topic:** Matrix Multiplication Implementation and Linear Transformations
**Duration:** ~105 minutes (two 50-minute sessions with break)
**Location:** M Z 026
---
## Course Administration Updates
### Website Resources
- **Homework assignments** now embedded on course website with AI-generated concepts/vocabulary
- **Hi TA guides** available showing interaction examples and capabilities
- **Learning objectives (LOs)** loaded into Hi TA system with 3 bullet points each
- **Lecture boards** section updated with Monday's boards (will expand to 3 pages: boards, audio summaries, notes)
- **Projects section** populated with micro-projects worth 1-2 points each for B/A grades
### Hi TA System Changes
- **Lecture reflections** simplified from 2 per class to 1 per day (24 total for course)
- **Export process**: Access markdown export via 3-dots menu when in conversation (not activity view)
- **Unknown checkbox** in activities - instructor unsure of function, likely for future Canvas integration
- **Completion verification**: Ask the bot if work is complete rather than relying on checkbox
---
## Technical Implementation: Matrix Multiplication
### Mathematical Foundation
**Matrix multiplication formula:** For A ∈ ℝ^(m×n) and B ∈ ℝ^(p×q) where n = p:
```
C[i,j] = Σ(k=1 to n) A[i,k] × B[k,j]
```
### Algorithm Structure
**Three nested loops required:**
1. **Outer loop:** i = 1 to M (output rows)
2. **Middle loop:** j = 1 to Q (output columns)
3. **Inner loop:** k = 1 to N (accumulation)
**Computational cost:** O(N³) for square matrices
### MATLAB Implementation
#### Matrix Definition and Syntax
- **Matrix creation:** Use semicolons for new rows, commas for new columns
- **Example:** `A = [1, 2; 3, 4]` creates 2×2 matrix
- **Dimension access:** `size(A,1)` for rows, `size(A,2)` for columns
- **Matrix initialization:** `zeros(M,Q)` creates M×Q zero matrix
#### Live Coding Session
```matlab
% Define input matrices
A = [1, 2; 3, 4];   % 2x2 matrix
X = [1; 0];         % 2x1 vector

% Get dimensions
M = size(A,1); N = size(A,2);
P = size(X,1); Q = size(X,2);

% Initialize output matrix
C = zeros(M,Q);

% Triple nested loop implementation
for i = 1:M
    for j = 1:Q
        for k = 1:N
            C(i,j) = C(i,j) + A(i,k) * X(k,j);
        end
    end
end
```
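The instructor's action items mention a Python equivalent of this implementation. A minimal sketch using plain nested lists (an illustrative port under my own naming, not course-issued code) might look like:

```python
# Python sketch of the same triple-loop matrix multiply.
# Nested lists stand in for MATLAB matrices (row-major).

def matmul(A, B):
    M, N = len(A), len(A[0])          # A is M x N
    Q = len(B[0])                     # B is N x Q (compatibility assumed here)
    C = [[0.0] * Q for _ in range(M)] # zeros(M, Q)
    for i in range(M):                # output rows
        for j in range(Q):            # output columns
            for k in range(N):        # accumulation index
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
X = [[1], [0]]
print(matmul(A, X))   # [[1.0], [3.0]] -- the first column of A
```

Multiplying by the standard basis vector [1; 0] extracts the first column, matching the board discussion of columns as images of basis vectors.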
#### Error Handling Discussion
- **Dimension compatibility:** Need to verify n = p before multiplication
- **Conditional structure:** `if n ~= p` then break/exit
- **MATLAB syntax:** `~=` for "not equal", `==` for equality comparison
- **Program termination:** `exit` terminates entire MATLAB (too aggressive), `break` only works in loops
---
## Linear Algebra Theory
### Identity Matrix
**Definition:** I ∈ ℝⁿˣⁿ where I[i,j] = δᵢⱼ (Kronecker delta)
- δᵢⱼ = 1 if i = j (diagonal elements)
- δᵢⱼ = 0 if i ≠ j (off-diagonal elements)
**Property:** A × I = I × A = A (multiplicative identity)
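The Kronecker delta definition translates directly into code; a small check (my own sketch, using NumPy only to verify the identity property):

```python
# Build I from the Kronecker delta definition and verify A*I = I*A = A.
import numpy as np

def identity(n):
    # I[i, j] = 1 if i == j else 0  (Kronecker delta)
    return np.array([[1.0 if i == j else 0.0 for j in range(n)]
                     for i in range(n)])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
I = identity(2)
print(np.allclose(A @ I, A) and np.allclose(I @ A, A))  # True
```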
### Matrix Inverse
For A ∈ ℝ²ˣ² with det(A) ≠ 0:
```
A⁻¹ = (1/det(A)) × [d, -b; -c, a]
```
where A = [a, b; c, d] and det(A) = ad - bc
**Key requirement:** det(A) ≠ 0 (otherwise inverse doesn't exist)
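The 2×2 cofactor formula above can be checked numerically; a sketch (helper name `inv2` is my own):

```python
# 2x2 inverse via the cofactor formula; requires det(A) != 0.
import numpy as np

def inv2(A):
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; inverse does not exist")
    return (1.0 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.allclose(A @ inv2(A), np.eye(2)))  # True
```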
---
## Linear Transformations
### Fundamental Properties
**Transformation T: ℝ² → ℝ²** defined by T(x) = Ax
#### Property 1: Linearity
T(c₁x₁ + c₂x₂) = c₁T(x₁) + c₂T(x₂)
- Maps linear combinations of inputs to linear combinations of outputs
#### Property 2: Origin Preservation
T(0) = 0 (origin maps to origin)
#### Property 3: Parallel Line Preservation
Parallel lines in input space remain parallel in output space
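The linearity and origin-preservation properties can be spot-checked numerically; a sketch with arbitrarily chosen vectors and scalars:

```python
# Numeric spot-check of linearity and origin preservation for T(x) = A x.
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
T = lambda x: A @ x

x1, x2 = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
c1, c2 = 2.0, -1.5

linear = np.allclose(T(c1 * x1 + c2 * x2), c1 * T(x1) + c2 * T(x2))
origin = np.allclose(T(np.zeros(2)), np.zeros(2))
print(linear, origin)  # True True
```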
### Standard Basis Transformation Analysis
#### Example 1: Reflection Matrix
```
A = [0, 1; 1, 0]
```
- **I-hat transformation:** [1,0] → [0,1]
- **J-hat transformation:** [0,1] → [1,0]
- **Effect:** Swaps x and y coordinates (reflection across y=x line)
- **Determinant:** det(A) = -1 (orientation flip, as expected for a reflection)
#### Example 2: Rotation Matrix
```
A = [0, 1; -1, 0]
```
- **I-hat transformation:** [1,0] → [0,-1]
- **J-hat transformation:** [0,1] → [1,0]
- **Effect:** 90° clockwise rotation
- **Determinant:** det(A) = 1 (orientation preserved, as expected for a pure rotation)
### Geometric Interpretation
- **Positive determinant:** Preserves orientation (rotation, scaling)
- **Negative determinant:** Reverses orientation (reflection component)
- **Determinant of magnitude 1:** Preserves area
- **Parametric lines:** T(P + tV) = T(P) + tT(V) (parallel lines stay parallel)
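The columns of A are the images of i-hat and j-hat, and the sign of det(A) records whether orientation flips. A quick check of both example matrices:

```python
# Columns of A are the images of the standard basis vectors, and the
# determinant sign distinguishes the reflection from the rotation.
import numpy as np

reflection = np.array([[0.0, 1.0], [1.0, 0.0]])   # swaps x and y
rotation = np.array([[0.0, 1.0], [-1.0, 0.0]])    # 90-degree rotation

i_hat, j_hat = np.array([1.0, 0.0]), np.array([0.0, 1.0])

print(reflection @ i_hat, reflection @ j_hat)  # first and second columns
print(np.linalg.det(reflection))               # ~ -1: orientation flips
print(np.linalg.det(rotation))                 # ~ +1: orientation preserved
```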
---
## Computational Visualization
### MATLAB Visualization Tools
- **Basic mesh plotting:** Shows before/after grid transformations
- **AI-generated enhancement:** Animated visualization showing continuous transformation
- **Animation feature:** Parameter sweeping to show transformation process dynamically
### Key Observations from Animations
1. **Reflection transformation:** Grid appears to "flip" during animation
2. **Rotation transformation:** Grid rotates continuously
3. **Parallel preservation:** Grid lines maintain parallelism throughout
4. **Origin fixation:** Center point remains stationary
---
## Programming Environment Notes
### MATLAB vs Python Discussion
- **MATLAB:** Primary environment for course (Matrix Laboratory)
- **Python option:** Available for students preferring open-source
- **Code portability:** Generative AI can assist with language translation
- **Industry trends:** Companies moving from MATLAB to Python for cost/flexibility
### Error Debugging
- **Common issues:** Case sensitivity (`zeros` vs `Zeros`)
- **Semicolon behavior:** Suppresses output display but code still executes
- **File execution:** Returns filename when semicolons used (normal behavior)
---
## Upcoming Work
### Homework Assignment Progress
- **Problem 1:** Should be completable after Day 1 content
- **Problem 2:** Should be completable after Day 2 content
- **Problem 3:** Will be covered in upcoming sessions
### Next Session Topics
1. **Error handling** implementation in matrix multiplication
2. **Eigenvalue/eigenvector** introduction
3. **Advanced transformation** analysis
4. **Mesh visualization** code distribution
---
## Action Items
### For Students
- [ ] Complete Hi TA interactions for Day 2 learning objectives
- [ ] Practice matrix multiplication implementation in chosen language
- [ ] Begin Problem 2 of homework assignment
- [ ] Export and submit Hi TA conversations to Canvas
### For Instructor
- [ ] Research proper MATLAB program termination commands
- [ ] Publish mesh visualization codes to website
- [ ] Create Python equivalent of matrix multiplication implementation
- [ ] Update course website with Day 2 materials
# 6-25-25
## Lecture Updates
- Lecture pictures have been posted.
- Learning reflections encourage reviewing lecture pictures and asking questions
- Learning objectives 5 and 6 are not yet available as they cover Taylor series, which hasn't been covered yet
- Learning objectives 5 and 6 will be available on Canvas tomorrow
- Learning objectives 7 and 8 have been unpublished to maintain consistency
- Lectures are planned to cover two learning objectives per day, but this may vary
## Matrix Discussions
- Discussion on matrices and coding with matrices is ongoing
- Aim to build intuition on matrix transformations without a full linear algebra class
- Upcoming discussion on eigen problems and their applications to regular stochastic processes
- Explanation of matrix jargon and its connection to prior topics
- All matrix multiplication code is available on Hi TA, including MATLAB and Python modules
- MATLAB code has been ported to Python for convenience
- Ports to R or Mathematica can be made upon request
- Discussion topic took longer than expected, causing some frustration
- Computational costs related to scientific computing were discussed, though considered outside the current scope
- Plan to improve matrix multiplication code by addressing the triple loop structure
- New project to compare performance of matrix multiplication code across different platforms
- Project on matrix multiplication runtime to be used as a basis for comparison in class discussions
## Error Handling in MATLAB
- Discussion on error handling in MATLAB, including the use of the 'error' statement
- The 'error' statement halts execution and displays a message when run within an if statement
- The 'return' command hands control back to the invoking context when encountered; at the command line in MATLAB, it returns control without displaying a message
## Error Handling in Python
- In Python, error messages can be raised using 'raise Exception', which allows for a custom message
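A minimal sketch of the `raise Exception` pattern with a custom message, using a hypothetical `check_dims` helper and message text of my own choosing:

```python
# Dimension check that raises with a custom message, mirroring the
# MATLAB 'error' statement discussed above. Helper name is illustrative.
def check_dims(A, B):
    n = len(A[0])   # columns of A (A is m x n)
    p = len(B)      # rows of B    (B is p x q)
    if n != p:
        raise Exception(f"inner dimensions disagree: {n} != {p}")

try:
    check_dims([[1, 2]], [[1, 2, 3]])   # 1x2 times 1x3: invalid
except Exception as e:
    print(e)   # inner dimensions disagree: 2 != 1
```

Unlike MATLAB's `exit`, raising an exception halts only the current computation; the interpreter session survives, and the caller can catch and handle the error.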
## MATLAB Code Precautions
- Be cautious with MATLAB code as it contains an 'exit' command that will close MATLAB if an error is encountered
- Modify the MATLAB code to remove the 'exit' command before use
- Plan to send the procedure to a function that takes two matrices as inputs
## Matrix Multiplication Project
- Plan to create a project for matrix multiplication outputting their product
- Include error handling with if statement for dimensionality issues
- Discussion to continue tomorrow on integrating this into a function
## New Projects
- New project to compare code against computer's multiplication algorithm for speedup analysis
- Project will be a collaborative effort to track progress in class
- Draft for the new project is ready for release
- Internal routines handle system memory uniquely to find speedup
- Explore the use of sparse routines for matrices with many zeros to improve efficiency
- Information on sparse routines will be released on the website
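The speedup project described above can be sketched as a timing comparison between the triple loop and the platform's built-in multiply. A Python version of that comparison (sizes and harness are my own choices; actual timings vary by machine):

```python
# Compare the hand-written triple loop against NumPy's built-in multiply.
# Only correctness is asserted; timings are machine-dependent.
import time
import numpy as np

def loop_matmul(A, B):
    M, N, Q = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((M, Q))
    for i in range(M):
        for j in range(Q):
            for k in range(N):
                C[i, j] += A[i, k] * B[k, j]
    return C

rng = np.random.default_rng(0)
A, B = rng.random((60, 60)), rng.random((60, 60))

t0 = time.perf_counter(); C_loop = loop_matmul(A, B); t_loop = time.perf_counter() - t0
t0 = time.perf_counter(); C_fast = A @ B; t_fast = time.perf_counter() - t0

print(np.allclose(C_loop, C_fast))   # results agree
print(f"loop: {t_loop:.4f}s, built-in: {t_fast:.6f}s")
```

The built-in routine wins because it dispatches to optimized, memory-aware internal libraries, which is exactly the point of the proposed comparison project.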
## Code Base Updates
- Edits to the code base will be made today, with updates to be shared tomorrow
- Discussion on computational run times in relation to matrix transformations
## Linear Transformations
- Matrices map the origin to the origin and maintain linear combinations and parallel lines in transformations
## Matrix Transformations
- Explore other parts of linear algebra for potential projects
- Matrix A transforms the standard basis vectors
- A maps input vector to output vector after transformation
- Discussion on color coding in the xy-plane: horizontal as red (i hat) and vertical as blue (j hat)
- I hat remains unchanged after transformation by matrix A
- J hat transforms to two units right and one unit up after transformation by matrix A
- Discussed benefits of using standard basis vectors for transformations
- Discussion on the unit square and its transformation under matrix A
## Vector Manipulation
- Construct the parallelogram by tip-to-tail addition: from the tip of the transformed i-hat (red), place the tail of the transformed j-hat (blue) and translate it parallel
- Transformation of unit square into a parallelogram using matrix A
- Transformation described as a shearing, similar to pushing a table to feel forces
- Cartesian grid described as malleable, bending into a parallelogram
## Parallelogram Area Calculation
- Area of the parallelogram is calculated as base times height
- Base of the parallelogram is the (unchanged) i-hat vector, with length one
- Height is the vertical component of the transformed j-hat, one unit, despite its two-unit horizontal shift
- Unit square transforms into a parallelogram of unit area
- No area change is observed, so the determinant of the matrix is one
- Connection to multivariate calculus: changing variables from (x, y) to (r, θ) introduces the determinant of the Jacobian matrix, the same area-scaling factor embedded in the determinant calculation
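The determinant-as-area statement can be verified directly, assuming the shear matrix [1, 2; 0, 1] implied by the earlier board work (i-hat fixed, j-hat moved two units right and one up):

```python
# Determinant of the shear equals the (unchanged) parallelogram area.
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])  # columns: images of i-hat, j-hat
print(np.linalg.det(A))   # ~ 1: unit square -> parallelogram of unit area
```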
## Shear Transformation Details
- Shear transformation acting on I hat and J hat
- Encouragement to run calculations for better understanding
- Preparation for visual representation with axes for picture
- Mathematicians often include humor in linear algebra books
- Waiting for thirty more seconds before proceeding
- Preparing to transfer unit square for transformation analysis
## Matrix Transformation Analysis
- Matrix transformation applied to I hat results in a vector with components (1, 1)
- Matrix transformation applied to J hat results in a vector with components (0, 1)
- I hat mapped to vector (1, 1) in the output plane, labeled as red
- J hat mapped to vector (0, 1) in the output plane, labeled as blue
- Horizontal transformation results in negative shift instead of positive
- Reference to 'box trick' for visualization
- Tail to tip, tip to tail method discussed for vector manipulation
- Familiarity with geometric concepts emphasized
## Rectangular Shape Analysis
- Discussion on the rectangular shape formed by red and blue vectors at a 90-degree angle
- Calculation of the area of the rectangular shape
- Red vector has horizontal and vertical components of one, so its length (the hypotenuse) is the square root of 2
- With both sides of length square root of 2, the area is 2, indicating a dilation of space
## Rotation and Scaling Effects
- Rotation and scaling of square increases area
- Determinant of matrix A calculated as two, indicating area increase
## Transformation Properties
- Rotations and shear transformations can occur simultaneously
- Compression occurs when determinant is between zero and one
- Negative determinant indicates a flip
- Two flips result in no change, similar to looking in a mirror twice
- If determinant is greater than one, it enlarges the area or volume
## Transition to Computer Analysis
- Transition to computer-based analysis initiated
- Code generated by machines includes animation feature for visualizing underlying space
- First matrix inputted for analysis, expected to be uninteresting
- Animation shows frame by frame transformation with a parameter to shear
- Humorous reference to 'sheared sheep' in linear algebra context
- Second matrix, [1, -1; 1, 1], applied, resulting in rotation and enlargement
- Animation shows rotation and scaling, multiplying area of cells from ones to twos
- Discussion of a homework matrix with determinant zero, indicating a loss of dimension
## Dimensional Reduction Discussion
- Condensing two-dimensional plane to one-dimensional line subpart discussed
- Determinant indicates loss of dimension, but not the extent of loss
- In three-dimensional space, reduction can be to two or one dimension
- Discussion on determinant vanishing indicating complete loss of area, potentially reducing to a point
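The point that the determinant signals collapse but not its extent can be made concrete with the matrix rank (a sketch using NumPy; the specific singular matrix is my own example):

```python
# det = 0 says area collapses; the rank says how many dimensions survive.
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])  # second column = 2 * first column
print(np.isclose(np.linalg.det(A), 0))      # True: zero area
print(np.linalg.matrix_rank(A))             # 1: plane collapses to a line
print(np.linalg.matrix_rank(np.zeros((2, 2))))  # 0: plane collapses to a point
```

In three dimensions a vanishing determinant likewise leaves open whether the collapse is to a plane, a line, or a point; the rank (or the count of zero eigenvalues, as noted below) resolves it.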
## Eigenvalue Analysis
- Examination of dependency relations within columns
- Counting zero eigenvalues to determine significance
## Simplifying Transformations
- Discussion on simplifying complex transformations by extracting simpler structures
- Discussion on eigenvalues and eigenvectors in transformation
## Shear Transformation Example
- Mona Lisa example used to illustrate shear transformation
- Top part of Mona Lisa pushed to the right, demonstrating vector effect
- Discussion on vector transformation from domain to range
- Discussion on the concept of vector having direction and magnitude
- Discussion on changes in direction and magnitude of vectors
## Space Manipulation Discussion
- Discussion on manipulating space like Play Doh, pushing and pulling to grow the space
- Attempt to identify a blue vector that maintains its direction during transformation
- Discussion on subdomains or subspaces where vectors maintain their direction during transformation
- Discussion on vectors whose directions remain unchanged during transformation, despite stretching or compressing
- Discussion on maintaining vector direction while allowing changes in length
- Goal to find lambda and x that satisfies the equation for unchanged direction
## Eigenvector and Eigenvalue Discussion
- Discussion on eigenvectors being special to a transformation as their direction doesn't change
- Lambda is identified as the corresponding eigenvalue to eigenvector x
- Issue identified: cannot subtract the scalar lambda from the matrix A directly; insert the identity matrix, rewriting Ax = lambda x as (A - lambda I)x = 0
- Discussion on homogeneous system with zero solution, where both sides equal zero when x is zero
## Homogeneous Problem Discussion
- Discussion on neglecting the origin point in favor of a subspace line
- Zero is a solution to the homogeneous problem, but seeking non-zero solutions
- Setting matrix to zero to explore non-zero solutions
- Requirement for matrix: determinant must be zero for non-trivial solutions
- Determinant of (A - lambda * I) = 0 for non-trivial x
- Prioritize solving the determinant equation first as Lambda is the only unknown
- Address eigenvectors after solving the determinant equation
- Instructor plans to work through an example, but notes a change in the expected progression
## Matrix Introduction
- Introduction of a matrix: [0, 1; 1, 0] as a segue into the next topic
## Animation and Transformation Discussion
- Animation used to demonstrate transformation of points on a grid
- Discussion on finding one-dimensional invariant subspaces for the matrix [0, 1; 1, 0]
## Eigenvalues and Eigenvectors Importance
- Eigenvalues and eigenvectors provide fundamental information about a matrix
- Characteristic polynomial for a 2x2 matrix: lambda squared minus trace(A) times lambda plus det(A), set equal to zero
- Emphasis on using computers for calculations, but understanding the concepts is crucial
## Matrix Properties Discussion
- Trace of the matrix is the sum of the diagonal elements, which is zero
- Determinant of the matrix is negative one
- Lambda one is identified as one and lambda two as negative one
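The trace/determinant shortcut can be checked by solving the quadratic numerically (a sketch; `np.roots` takes the polynomial coefficients in descending order):

```python
# Characteristic polynomial lambda^2 - trace(A)*lambda + det(A) = 0
# for A = [0, 1; 1, 0]: trace 0, determinant -1, so lambda = +/- 1.
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
tr, det = np.trace(A), np.linalg.det(A)
roots = np.roots([1.0, -tr, det])
print(sorted(roots))   # lambda = -1 and 1
```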
## Technical Setup
- Instructor successfully navigated technical setup with document camera
- Instructor plans to solve only one eigenproblem by hand, rest will be computed using software
## Matrix Line Representation
- Subtracting lambda one times the identity matrix from A gives (A - lambda I) = [-1, 1; 1, -1]
## Equation Solving Reminder
- The resulting equations to solve: -x + y = 0 and x - y = 0
- Rows are the same line due to determinant set to zero
- Eigenvector for this eigenvalue is any vector on that line with non-zero length
## Eigenvector Choices
- Eigenvector choices for lambda one include (1,1), (2,2), (-π,-π), but not (0,0)
- Running routine for lambda two equals negative one
## Eigenstructure Overview
- For lambda two, the all-ones matrix [1, 1; 1, 1] times [x, y] equals [0, 0], so y equals negative x
- Choosing x equal to one gives y equal to negative one
- Eigenvalue one has eigenvector (1,1)
- Second eigenvector is (1,-1) with eigenvalue negative one
- Animation shows transformation around the line y equals x
- Animation illustrates the flipping effect of eigenvalue negative one
- Discussion on eigen lines and their visibility in matrix eigenstructure
## Matrix Review and Eigen Lines
- Review of matrices and discussion on potential eigenvalues
- Eigen lines remain unchanged after transformation
## Recent Achievements
- Successful implementation of multiplication code
## Computational Techniques
- Discussion on copying and pasting techniques in mathematical computations
## Notation and Data Management
- Discussion on matrix size reaching 5.6 megabytes
- Use of underlines for vectors instead of over arrows for clarity
## Document Management Issues
- Folder error encountered in document management
- Suggested solution: recreate the missing folder or choose a different working directory
- MATLAB will rewire your path when trying to run specific class files
- Errors encountered when using x and y as variables
- Discussion on checking process against a vector and handling x and y as zero or one
## Eigenvalues and Eigenvectors
- Plan to modify the system to accept dot n b input
## Meeting Observations
- No one took the five minute break
- Working at the command line in MATLAB to define matrices instead of using scripts
- Use of the eig command in MATLAB to extract eigenvalue and eigenvector information
- Discussion on eigenvalues and eigenvectors in relation to matrix A
- Use of the eig(A) command in MATLAB to process a 2x2 matrix and extract eigenvalues and eigenvectors
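In the Python port, numpy's `linalg.eig` plays the role of MATLAB's `eig`; a sketch of the `[V, D] = eig(A)` pattern under that assumption:

```python
import numpy as np

# Python-port analogue of MATLAB's [V, D] = eig(A).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
w, V = np.linalg.eig(A)           # w: eigenvalues, V: eigenvectors in columns
D = np.diag(w)                    # MATLAB's D: eigenvalues on the main diagonal
# Each column of V satisfies A @ V[:, k] = w[k] * V[:, k], i.e. A V = V D.
print(np.allclose(A @ V, V @ D))  # True
```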
## Large Matrix Computation
- Instant computation for large matrices (e.g., million by million) to identify important features
- Discussion on the structure of eigenvectors in matrices and their symmetry
- Discussion on matrix D as a 2x2 matrix with eigenvalues on the main diagonal and zeros elsewhere
- Explanation of eigenvectors in column form with lambda equal to negative one
- Discussion on eigenvectors corresponding to lambda equal to one and negative one
- Discussion on matrix A with elements 0, 1, -1, 0 and its role in rotation
- Animation grids appear identical, indicating a rotation
- New matrix A defined and used to sort eigenvectors and store eigenvalues
## Matrix Transformations
- Plane rotation observed, drawing on differential equations (Diff EQ) intuition for eigenvalues
- Execution of command line operations on matrix A for eigenvalue analysis
- Observation of complex eigenvalues: one is imaginary I, another is negative imaginary I
- Discussion on spirals and complex eigenvalues in relation to differential equations knowledge
- Observation of unit square rotation and scaling, increasing area by a factor of two
- Matrix A defined with ones in upper right, lower left, and lower right
- Transformation involves rotation and change in size
- Discussion on the role of real parts in stretching and changing size during rotation
- Discussion on shearing effect on matrix and its impact on eigenvectors
- Eigenvalues are both one, indicating no change in area
- Eigenvector identified as I hat, leaving the horizontal line unchanged
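The rotation and shear observations can be reproduced numerically. A Python sketch, assuming the rotation matrix [0, 1; -1, 0] from the discussion and a standard shear [1, 1; 0, 1] (the lecture's exact shear matrix is not recorded in the notes):

```python
import numpy as np

# Rotation matrix [0 1; -1 0]: no real eigen lines; eigenvalues are +/- i.
R = np.array([[0.0, 1.0], [-1.0, 0.0]])
wr = np.linalg.eig(R)[0]
print(wr)                         # purely imaginary pair, i and -i

# Shear [1 1; 0 1]: both eigenvalues are 1 (area unchanged), and
# i-hat = (1, 0) spans the single horizontal eigen line.
S = np.array([[1.0, 1.0], [0.0, 1.0]])
ws, Vs = np.linalg.eig(S)
print(ws)                         # [1. 1.]
print(S @ np.array([1.0, 0.0]))   # i-hat is unchanged: [1. 0.]
```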
## Linear Algebra Conclusion
- Last topic in the linear algebra section discussed
## Anecdotes
- Humorous anecdote shared about a string and a bartender
## Stochastic Matrices
- Transition to discussion on regular stochastic matrices
- Discussion on left stochastic matrices and their properties
- Stochastic matrix is square with non-negative real entries and columns summing to one
## Stochastic vs. Deterministic Systems
- Stochastic processes are random processes, opposite of deterministic systems
- Columns summing to one represent a probability state, indicating total probability in the column is one
- Entries of a stochastic matrix are interpreted probabilistically, as transition probabilities between states
- Discussion on matrix powers and their positivity in linear algebra context
- Regular stochastic matrix has positive entries in some power, important for theoretical applications
- Matrix A acts on vector x0 to update it one step forward in time
- Subscript in x0 can be interpreted as time
- Transition from x0 to x2 requires updating x
- x1 is the update of x0; x2 equals A squared times x0, and in general the nth state is A to the n times x0
- Updates by matrix multiplication relate to x0, involving repeated calculations
- Inefficient calculation coded up and repeated n times for regular stochastic matrix
- Limit of x subscript n as n approaches infinity is called p vector
- Stationary vector defined as the point where successive matrix multiplication updates result in no change
- Regular stochastic matrix with non-negative entries can have zero entries
- Elevating a regular stochastic matrix to a power where all entries are positive indicates a stationary vector
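The "positive entries in some power" test for regularity can be sketched in Python. The helper name and the power cap are illustrative choices, not from the course code; the matrix is the two-thirds/one-third example used later:

```python
import numpy as np

# A column-stochastic matrix is regular if some power of it has strictly
# positive entries. Brute-force check up to a chosen power cap.
def is_regular(A, max_power=50):
    P = np.eye(A.shape[0])
    for _ in range(max_power):
        P = P @ A
        if np.all(P > 0):
            return True
    return False

A = np.array([[2/3, 1/3],
              [1/3, 2/3]])
print(is_regular(A))  # True: already positive at the first power
```

A swap matrix like [0, 1; 1, 0] is stochastic but not regular: its powers alternate between itself and the identity, so zeros never disappear.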
## Chessboard Problem Exploration
- Mention of a PBS video exploring knight's moves on a chessboard
## Markov Chain Theory
- Markov chain consists of a state space and a probability transition function
- Example of Markov chain: radio station playing K-pop and Ska as states
- Probability transition: two-thirds chance of playing the same genre next
- One-third probability of switching genres in Markov chain
- Introduction of stationary distribution in Markov chain
- Stationary distribution assigns a number to each state based on long-term behavior
- Stationary distribution example: equal probability distribution between K-pop and Ska
## Algebraic Problem Solving
- Discussion on mathematical results involving one half and one third
- Mention of linear algebra and eigenvectors in relation to these results
- Discussion on addressing algebraic problems in different contexts
## Matrix Label Redefinition
- Diagram with nodes A and B on a graph, allowing transitions A to B, B to A, B to B, and A to A
- Discussion on interpreting drawings from a probability and Markov chain perspective
- Example given with two-thirds probability on one part and one-third on another
- Redefinition of matrix labels from A and B to K and S for clarity
- K represents K-pop and S represents Ska in the matrix
- Filling matrix with data encoded in the graph, focusing on the K and S rows and columns
- Element in the K row, K column represents the probability of K transitioning to K
- Probability of K transitioning to K is two-thirds
- Matrix labels clarified in class: leftover A's rewritten as S's
- Transition probability from Ska to K-pop is one-third
- Transition probability from K-pop to Ska is one-third, indicating symmetry
- Transition probability from Ska to Ska is two-thirds
- Columns of the matrix sum to one, confirming probability distribution
- All entries are positive for a matrix raised to the power of one, confirming it as a regular stochastic matrix
- Limit can be attained by repeatedly applying the matrix to initial distributions between K-pop and Ska
- x0 is considered an initial probability vector
- Symmetry in the system may simplify calculations
- Define SK matrix labels consistently, either as row-column or column-row
- K-pop and Ska transition probabilities are one-third, interchangeable without impact
- Personal preference leans towards defining Ska to K-pop transition
- Probability vectors are columns in stochastic matrices
- Important defining characteristic of a probability vector discussed
- Elements of the matrix sum to one, confirming it as a probability matrix
- Question raised about the limit of the matrix as n approaches infinity
- Expectation of a 50-50 distribution between K-pop and Ska after applying matrix n times
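The expected 50-50 limit can be checked by repeatedly applying the matrix, as described. A Python sketch with the two-thirds/one-third transition matrix:

```python
import numpy as np

# Repeatedly apply the K-pop/Ska transition matrix to an initial
# probability vector; the iterates approach the stationary 50-50 split.
A = np.array([[2/3, 1/3],
              [1/3, 2/3]])
x = np.array([1.0, 0.0])   # initial state: everyone listening to K-pop
for _ in range(40):
    x = A @ x              # one discrete time step per multiplication
print(x)                   # close to [0.5, 0.5]
```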
## Eigenvector Independence
- Theorem: if lambda one and lambda two are different, then the corresponding eigenvectors are linearly independent
- Colloquial definition of linear independence: two column vectors are linearly independent if they point in different directions in the plane
- Linear independence in R2 involves two different directions
- In higher dimensions, more directions are compared
- Example: East and West as opposite directions on a map
- South is the negative direction of North, so North and South are linearly dependent, not independent
- East and North, each with its negative, give two orthogonal directions
- Orthogonal directions allow for complete navigation on a map
- Directions in terms of North and Northeast can still allow full navigation on a map
- Having two different directions in the plane allows for complete directional control
- Any vector in the plane can be made using eigenvectors v1 and v2
- In three dimensions, more permutations need to be considered
- Constants c1 and c2 are used to build vectors
- Linear combinations involve multiplying vectors by constants and adding them together
- Constants c1 and c2 are used in linear combinations to express x vector as c1 v1 plus c2 v2
- Working in R2 involves two directions: v1 and v2
- Directions v1 and v2 may not align with cardinal directions like North or East
- Theoretical ability to determine distance in direction v1 and v2 to reach x0
- If matrix A is raised to the nth power and acts on x0, it is equivalent to A^n acting on c1v1 + c2v2
## Linear Transformations and Combinations
- Linear transformation acts on a linear combination
- Editorial article mentioned linear transformations
- Common math class phrases: derivative of sum is sum of derivatives, integral of sum is sum of integrations
- Linear transformation of a linear combination is the linear combination of the corresponding linear transformations
- Plan to take a board picture and write out details later
- Current focus is on linear transformations, with intention to elaborate
- Matrix A to the nth power is a linear transformation built from many multiplications
- Distribution over linear combination is possible
- Matrix A raised to nth power acts on eigenvector v1
- v1 is an eigenvector with an origin story
- Eigenvector v1 remains in the same direction when acted upon by matrix A
- Eigenvector v1 can be scaled by its corresponding eigenvalue, Lambda
- Lambda one is a scalar and remains unaffected by matrix A
- Matrix A acts on eigenvector v1, producing another Lambda one
- Repeated application of matrix A on v1 results in consistent Lambda one output
- Repeated occurrence of eigenvector v1 results in n many Lambda ones
- Expression: c1 times lambda one to the n, times v1, plus c2 times lambda two to the n, times v2
- Inquiry about the mathematical nature of Lambda
- Discussion on the computational complexity of raising a matrix to a high power
- Scalars, such as lambdas, can be easily raised to high powers using calculators
- Discussion on using computers to determine eigenvalues (Lambdas) and eigenvectors (V's) for large n
- Encouragement to apply matrix repeatedly to vectors to observe behavior
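The eigen-decomposition shortcut described above (power the scalar lambdas, not the matrix) can be sketched in Python; the decomposition of x0 into c1, c2 is done with a linear solve:

```python
import numpy as np

# Writing x0 = c1*v1 + c2*v2 gives A^n x0 = c1*l1^n*v1 + c2*l2^n*v2,
# so only scalars get raised to the nth power.
A = np.array([[2/3, 1/3],
              [1/3, 2/3]])
w, V = np.linalg.eig(A)          # columns of V are v1, v2
x0 = np.array([1.0, 0.0])
c = np.linalg.solve(V, x0)       # coordinates c1, c2 of x0 in the eigenbasis
n = 10
shortcut = V @ (c * w**n)        # c1*l1^n*v1 + c2*l2^n*v2
direct = np.linalg.matrix_power(A, n) @ x0
print(np.allclose(shortcut, direct))  # True
```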
## Demographic Transitions and Migration Patterns
- Discussion on demographic transitions between cities like Denver and LA
- Analogy of K-pop and Ska used to illustrate migration patterns
- Data from U-Haul and moving companies used to track demographic transitions
- Graph and matrix can represent comings and goings between different buckets
- Examples of buckets: states, cities, radio stations
- Matrix values mentioned: one-third, two-thirds
- Discussion on eigenvalues and eigenvectors in relation to matrix power
- Lambda one identified as the first diagonal entry of matrix D, value is one
- Computers report eigenvectors as unit vectors by default, dividing by their own length
- Unit eigenvectors may have messy-looking entries due to the normalization
- As n approaches infinity, raising one-third to higher powers results in it getting smaller
- In the limit, as n goes to infinity, the probability approaches zero
- Probability transition matrix applied to any initial state results in the first term becoming negligible
- Raising one to the nth power remains one, resulting in c two times one
- Inquiry about the distribution of people between K-pop and Ska
- c two is a constant affecting the distribution outcome
- Discussion on vector entries not summing to one, questioning its validity as a probability vector
- Freedom in choosing constant c2 to achieve desired vector effect
- Construction of vector in terms of a probability vector with flexibility in c2
- Long-term distribution between two radio stations or cities expected to be fifty-fifty
- Flowchart symmetries support this distribution
- Plan to interpret the flowchart further
- Initial state can be set to 100% K-pop
- Initial state can be set to 0% Ska, 100% K-pop
- Dot product used to calculate transition probabilities
- Two-thirds probability for K-pop to K-pop transition
- One-third probability for K-pop to Ska transition
- Problem formulation can be approached through right or left stochastic conventions
- Discussion on left formulation and its agreement
- Mention of the need for probabilities to sum to one
- Intuitive approach suggested for averaging after hearing a 'k'
## Upcoming Topics and Clarifications
- Change in values mentioned, further clarification needed
- Plan to discuss left versus right formulation
- Taylor series to be covered in the next session
- Matrices topic concluded for now
# 6-26-25
## Project Updates
- Meeting has started
- Two projects posted: matrix multiplication and image compression through eigen information
- Projects available on the website with a deliverable checklist
- Projects aim to build a larger code base with generative AI to solve real-world problems
- Projects include a one-page reflection on accomplishments and learnings
- A five to seven minute video is required to demonstrate the code base and explain its components
## Deliverable Considerations
- Concerns about the ease of submitting work to generative AI without understanding the process
- Emphasis on ensuring students gain knowledge from the projects
- Video format is considered the easiest way to demonstrate understanding
- Consideration of different media formats for project deliverables
- Discussion on traditional exam methods like blue books
- Companies producing blue books are seeing increased purchases as traditional exam methods are revisited
- Students are encouraged to explore both projects but focus on matrix multiplication
## Future Discussions
- Codes are posted on HiTA for reference
- Discussion planned for Monday on deliverables and capabilities
## Technical Enhancements
- Python ports have been made for all projects to ensure compatibility
## Submission Guidelines
- Submission zone will accept multiple submissions including handwritten and code materials
- Option to upload handwritten responses and built codes
- Submission zone will accept zip files for convenience
## Taylor Series Discussion
- Files should be named in order with a simple structure like '01.name', '02.name' for easy reference
- Question 5 on the current homework assignment is designated for group work
- Introduction to Taylor series will begin today
- Two dimensional Taylor series might be briefly touched on today
- Helper materials for two dimensional Taylor series will be provided by Monday
- Continuation of Taylor series discussion planned for today
## Probability Transition Matrix
- Discussion on probability transition matrix and its symmetric properties
- Discussion on regular stochastic matrix problem setup
- A matrix acting on an initial probability gives rise to a new state in a discrete time process
- A matrix is used to represent transitions between states, with rows and columns labeled for each state
- Vector representation includes percentage on k and percentage on s
- Suggestion to visualize the vector representation with a clock analogy
- Quick back of the envelope check suggested for matrix setup
- Decision to discuss probability vector in detail
- Task assigned to populate the matrix based on the given statement
- Transition from state j to state i is the typical formulation
- Additional 30 seconds allocated for discussion
## Matrix Value Placement
- Discussion on placing values in the probability transition matrix, specifically for state k to state k
- Emphasis on avoiding common mistakes like flipping values
- Discussion on flipping diagonal elements for left vs right stochastic matrices
- Placement of one half and one fourth in the matrix for state transitions
- Clarification on transition probability: first row, second column should be transition to state k from state s, with a value of one half
- Columns in the left stochastic matrix add up to one
- Transition from state k to state s should have a value of one fourth
- Basis vectors used for interpretation of matrix representation
- Matrix columns add up to one, indicating a regular stochastic matrix
- All matrix entries are non-negative
- Application of matrix to standard basis I hat discussed
## Initial State and Transition Predictions
- Initial state vector (I hat) represents the initial configuration of radio listening
- Current configuration: All listening to K-pop, none to Ska
- Application of probability transition matrix predicts future state
- Calculation: 3/4 times 1 plus 1/2 times 0 equals 3/4
- Calculation: 1/4 times 1 plus 1/2 times 0 equals 1/4
- Current state: All listening to K pop, none to Ska
- Transition to next state: 3/4 listening to K pop, 1/4 to Ska
- J hat represents an even split: 1/2 listening to K pop and 1/2 listening to Ska
- Red notes indicate potential errors in transition probability matrix setup
- Right stochastic matrix discussed, where rows sum to one
- Application of right stochastic matrix to initial state of all listening to K pop results in 3/4 in the first row
- Discussion on potential errors in transition probability matrix setup, indicated by red notes
- Verification of matrix setup by checking if a equals three fourths
## Eigenvalues and Eigenvectors Discussion
- Eigenvalues and eigenvectors are being calculated from the a matrix
- Eigenvectors are stored in the x matrix, eigenvalues in the d matrix
- MATLAB function 'eig' is used to compute eigenvalues and eigenvectors
- One eigenvalue identified as one
- Discussion on the effect of repeatedly applying the matrix to the probability vector, specifically the one quarter eigenvalue approaching zero
- Need to normalize the probability vector for accurate representation
- Probability vector formulation resulted in one third and two thirds
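The normalization step can be sketched in Python: numpy, like MATLAB, returns unit-length eigenvectors, so the lambda = 1 eigenvector is rescaled so its entries sum to one, recovering the one-third/two-thirds stationary vector:

```python
import numpy as np

# Transition matrix from the lecture: 3/4, 1/2 in the first row.
A = np.array([[3/4, 1/2],
              [1/4, 1/2]])
w, V = np.linalg.eig(A)
k = np.argmin(np.abs(w - 1))   # pick the eigenvalue closest to 1
p = V[:, k] / V[:, k].sum()    # rescale unit vector into a probability vector
print(p)                       # approximately [0.667, 0.333]
```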
## Linear Algebra in Scientific Computing
- Discussion on the relevance of eigenstructure in linear algebra for scientific computing
## Code Development
- Discussion on building a code from scratch and commenting for clarity
- Educational code with comments available on HiTA and Python port for replication
## Polynomial Functions
- Discussion on scalar functions of scalar variables in mathematics
- Simple functions refer to basic mathematical functions often introduced in high school
- Introduction to polynomials as linear combinations of powers
- Focus on two simple polynomial functions for analysis
## Polynomial Visualization
- Visualization of x squared and x cubed polynomials: x squared goes up, x cubed has one side going up and one going down
- Polynomials exhibit bumpy features when more terms are added
- As x approaches infinity, polynomials diverge in positive or negative directions
- Polynomials diverge as magnitude goes to infinity
- Derivative of a polynomial results in another polynomial
- Derivative of a cubic polynomial results in a quadratic polynomial
- Polynomials differentiate to lower order polynomials until they vanish
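The "derivatives lower the degree until the polynomial vanishes" observation, sketched with numpy's polynomial helpers (coefficients listed low-to-high, a numpy convention):

```python
import numpy as np
from numpy.polynomial import polynomial as P

c = [1.0, -2.0, 0.0, 4.0]   # 1 - 2x + 4x^3, a cubic
d1 = P.polyder(c)           # derivative is a quadratic: -2 + 12x^2
d4 = P.polyder(c, 4)        # four derivatives of a cubic: identically zero
print(d1)                   # [-2.  0. 12.]
print(d4)
```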
## Oscillatory Functions
- Discussion on oscillatory functions and their non-perpetual oscillation
- Polynomial bumps are not perpetual oscillations; the tails eventually diverge
- Hyperbolas considered in relation to oscillations
## Functions with Asymptotes
- Discussion on functions with finite vertical asymptotes and their behavior
- Functions associated with staining petri dishes and cell division
- Mention of exponential growth in relation to these functions
## Limitations and Extensions of Polynomial Models
- Discussion on limitations of polynomials in modeling certain STEM phenomena
- Inquiry on methods to incorporate other behaviors into polynomial models
## Polynomial Examples
- Computers cannot represent transcendental numbers like pi, which go on forever
## Taylor Polynomial Approximation
- Discussion on approximation by Taylor Polynomial to match functions locally
- Example given: f(x) = x^n, discussing low degree polynomials
- Discussion on nth degree Taylor Polynomial centered at x₀ = 0
- Explanation of nth degree polynomial with highest order x power being n
- Discussion on nth order polynomial with terms up to aₙ(x-x₀)ⁿ
- Key statement: substituting x₀ into the polynomial leaves only the constant term a₀, since every (x-x₀) factor vanishes
- Agreement at x₀: Evaluate polynomial at x₀, a₀ should match the function value
- Agreement at x₀: a₀ should be e^(x₀) for polynomial and its derivatives
## Derivatives of Polynomial Functions
- Derivative of pₙ at x₀: Constant term becomes zero, derivative of (x-x₀) yields a₁
- Further derivatives: Coefficient a₂ emerges, subsequent terms follow pattern
- Application of the power rule in derivatives, with x₀ substitution leading terms to zero
- Derivative process: n comes out front, reducing power by one
- Discussion on derivative process: Only a₁ remains after substitution, aligning with f'(x₀)
- Forcing the derivative of polynomial to agree with target function's derivative at x₀
- Observation of pattern formation with pₙ'' at x₀
- Second derivative: a₁ term is eliminated, leaving 2a₂ plus higher-order terms in (x-x₀)
- Matching polynomial's second derivative at x₀ with the function's second derivative
- Higher order terms in derivatives reduce sequentially: n, n-1, n-2, etc.
- Substitution of x = x₀ results in terms becoming zero
## Third Derivative Discussion
- Introduction of third derivative in discussion, aiming for clarity in notes
- Plan to write a statement about the third derivative for documentation
- Taking three derivatives results in 3 * 2 * a₃ plus higher-order terms in (x-x₀); at x₀ only the a₃ term remains
- Demand for polynomial to match third derivative of function at x₀
- Choice of e^x simplifies calculation of derivatives
- Coefficients for Taylor Polynomial: a₀ = e^(x₀), a₁ = e^(x₀), a₂ = e^(x₀)/2, a₃ = e^(x₀)/(3*2)
- Observation of factorial pattern emerging in Taylor series construction
- Expression for pₙ(x): Sum from k=0 to n of aₖ(x-x₀)ᵏ
- Repeated appearance of e^(x₀) in all coefficients due to chosen function
- Connection between denominators and the subscript of the coefficient: for a₃ the denominator is 3 * 2 * 1
- Coefficient aₖ in Taylor Polynomial: Denominator is k factorial, with (x-x₀)ᵏ term
- Centering at x₀ = 0 simplifies calculations, symbolic x₀ used for generality
- Maclaurin representation is the name for a Taylor polynomial centered at zero
- Aim to build a polynomial that matches the function e^x
- Key point: First derivative provides geometric interpretation of function behavior
- Higher order derivatives provide geometric information: concave up, concave down, etc.
- Aim to align polynomial with function at specific point, capturing geometric hierarchy
- Collect geometric information of f at x₀ to build polynomial
- Taylor polynomials and series aim to extract essential features of a function to match data in the polynomial
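The coefficient pattern a_k = e^(x₀)/k! can be sketched in Python (centered at zero for simplicity, i.e. the Maclaurin case; the function name is illustrative):

```python
import math

# Degree-n Maclaurin polynomial for e^x: coefficients are 1/k!.
def maclaurin_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 1.0
for n in (2, 5, 10):
    print(n, maclaurin_exp(x, n), math.exp(x))
# the approximation approaches e = 2.71828... as n grows
```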
## Taylor Series and Infinity
- As n approaches infinity, the Taylor series becomes exact
- Mention of series and sequences in relation to Taylor series
- Humorous remark about not admitting to letting polynomials become infinitely long
- Taylor series becomes identical to the function on its domain as n approaches infinity
- Question raised: How good is the approximation when using computers?
## Error Estimation in Polynomial Approximations
- Importance of estimating error in polynomial approximations, similar to measuring liquid in a graduated cylinder
- Maintaining awareness of information lost in polynomial representation
- Mention of Taylor's theorem as a reference for understanding timing in teaching
## Taylor's Theorem
- Taylor-Lagrange theorem: Assume f is an n+1 times differentiable function on interval I
- Introduction of typical notation used in class for x₀
## Greek Letters and Notation
- Discussion on the Greek letter xi and its pronunciation in Greek and American English
- Anecdote about math classes with multiple squiggles
- Visual description of drawing Greek letter xi: resembles Charlie Brown or Homer Simpson with a tuft of hair and a tail
- Existence of a c between x and x₀ ensuring certain conditions hold
## Taylor Polynomial and Remainder
- The function equals the nth degree Taylor polynomial plus the Taylor remainder
- Various formulations for Taylor remainder, including Lagrange's formulation
## Taylor Series Formulation
- Formula for the remainder: f^(n+1)(c)/(n+1)! * (x - x₀)^(n+1), where x is the independent variable and x₀ is the point of expansion
- Centering the polynomial representation at x₀
- Discussion on the notation of f function with superscript in parentheses indicating n+1 derivative
- The function f is exactly the Taylor polynomial plus an error term for some c
## Error Identification in Polynomial Approximation
- The theorem captures what is lost in polynomial approximation, but the exact size of the error is uncertain
- The location of c is within the interval, but its exact position is unknown
- Example calculation of Taylor polynomial and estimation of Taylor remainder
- Discussion on identifying the amount of information lost in polynomial approximation
## Coursework and Assignments
- Learning objectives (LOs) will involve interaction with HiTA
- Handwritten work will be required for homeworks and projects
- Scratch work should be organized, linearized, and submitted with code
## Submission Details
- Drafts can be reviewed before final submission
- Canvas will be opened for submissions at the end of next week, with rolling submissions thereafter
## Project Requirements
- Projects may require typesetting for reflections
- Projects will have rolling due dates, feedback will be solicited before finalizing dates
## Lecture Pacing and Planning
- Discussion on the pacing of Taylor series topics, with some topics being too advanced
- Clarification on f(x) as the exact equation and P sub n as the polynomial approximation
- Discussion on taking the limit as a function approaches zero, leaving the polynomial as the function when n approaches infinity
- Estimation of the worst-case scenario for polynomial approximation error
## Taylor Series Error Analysis
- Discussion on the potential divergence of Taylor series if derivatives are ill-behaved
- Importance of the c value and derivative behavior in error term analysis
- Example replication in coding base, using MATLAB for Taylor polynomial
- Discussion on the remainder term in Taylor's theorem and its exactness
- Clarification on the remainder term being the only component on the right-hand side in Taylor's theorem
- Discussion on the magnitude of error between the function and its approximation
## Uncertainties in Derivative Behavior
- Uncertainty about the behavior of the n+1 derivative evaluated at some c, where c is between x and x naught
- Suggestion to bound the error term using a placeholder 'm' for the boxed term, multiplied by (x - x naught)^(n+1) over n factorial
- Clarification on 'm' as the maximum value of the n+1 derivative on the domain
- Estimation of an upper bound for the error in Taylor's theorem without knowing the exact c value
## Building Taylor Polynomials
- Example started with f(x) = x log x to find a third derivative
- Taylor polynomial centered at x naught equal to one to approximate f(2)
- Discussion on estimating 'm' for the remainder term in Taylor's theorem
- Discussion on using Taylor polynomial to approximate x log x at x = 2
- Requirements for building a Taylor polynomial to third order: constant term, first derivative term, second derivative term, and third derivative term
- Need three derivatives for the polynomial and one more for the remainder term
- Calculation of the derivative of x log x: f'(x) = log x + 1
- Calculation of the second derivative of x log x: f''(x) = 1/x
- Calculation of the third derivative of x log x: f'''(x) = -1/x^2
## Derivative Calculations
- Mention of the fourth derivative for further analysis
- Calculation of f'(1) for x log x: f'(1) = 1
- Calculation of f''(1) for x log x: f''(1) = 1
- Calculation of f'''(1) for x log x: f'''(1) = -1
- Fourth derivative not evaluated at x = 1; used for the remainder estimate
- Explanation of third order Taylor polynomial: P_3 = f(1) + f'(1)/1! * (x - x_0) + f''(1)/2! * (x - x_0)^2 + f'''(1)/3! * (x - x_0)^3, with known x_0 values
- Explanation of centering the Taylor polynomial at x naught = 1
- Calculation of f(1) and f'(1) for centering, with f(1) = 0 and f'(1) = 1
- Cubic approximation to x log x: P_3 = (x - 1) + (1/2!)(x - 1)^2 - (1/3!)(x - 1)^3
- Explanation of centering at x naught = 1 as the point where the polynomial and function match exactly
- Discussion on the effectiveness of the cubic approximation slightly away from x naught = 1
- Calculation of f(2) using a pocket calculator instead of plugging into x log x
- Use of third order Taylor polynomial P_3 to approximate f(2) by plugging in x = 2
- Clarification on the sign of the third derivative term in the Taylor series, confirming it is plus f'''(x)
- Question raised on how to bound the error in the approximation of f at x = 2
- Discussion on the application of Taylor series in scientific computing and error bounding in approximation schemes
- Instructor preparing to use the projector for further explanations
## Lecture Anecdotes
- Anecdote about a chaotic lecture with overhead projectors and rapid slide changes
- Mention of a bilingual class with dialogue in Chinese
- Discussion on the challenges of bilingual classes and rapid slide changes
## Error Estimation in Taylor Polynomials
- Error estimate bounded using the (x - x naught)^(n+1) factor, with x naught = 1 and x = 2
- n value in the approximation is 3, confirming the use of a third order Taylor polynomial
## Error Bound Estimation
- Attempt to make an informed decision on error bound estimation using MATLAB
- Inquiry about the fourth derivative term, initially thought to be two
## MATLAB Function Definitions
- Correction on the fourth derivative term: 2/x^3
- Discussion on plotting features in MATLAB for visualization
- Explanation of defining a function with four derivatives in MATLAB using anonymous functions
## MATLAB Environment Discussion
- Discussion on MATLAB's default setting to work with matrices and vectors
- Visualization of functions as parabolas in MATLAB environment
## MATLAB Plotting Discussion
- Reminiscing about algebra class and the method of plotting points on graph paper
## MATLAB Domain Specification
- Use of linspace in MATLAB to create a linear spacing of x values for domain specification
- Specification of domain in MATLAB from 1 to 2 with 100 evenly spaced points
- Discussion on plotting input variable x and output f in MATLAB
## MATLAB Taylor Series Discussion
- Issues with Taylor series implementation in MATLAB, specifically with the fourth derivative
- Discussion on creating a table of points for plotting in MATLAB
- Clarification that x is a vector, consisting of 100 numbers, in MATLAB
- Discussion on the misconception of cubing vectors in MATLAB
- Explanation of element-wise operations in MATLAB using the dot operator for cubing vectors
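A Python-port version of the same plotting setup: numpy's `linspace` mirrors MATLAB's, and `**` is already element-wise on arrays, so no dot operator is needed (unlike MATLAB's `.^`):

```python
import numpy as np

# Domain: 100 evenly spaced points on [1, 2], as in the MATLAB session.
x = np.linspace(1, 2, 100)
f = x * np.log(x)     # element-wise: x log x at every grid point
cubed = x**3          # ** is element-wise on arrays (MATLAB needs .^)
print(x[0], x[-1], f[0])   # grid endpoints; f(1) = 0
```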
## MATLAB Error Reports
- Error encountered during plotting in MATLAB related to pre-class activities
- Error due to matrix dimensions not agreeing during plotting in MATLAB
## MATLAB Taylor Series Limitations
- Discussion on the role of the fourth derivative in the Taylor remainder theorem
- Discussion on the limitations of Taylor series in identifying the exact value of c between 1 and 2
- Consideration of the worst-case scenario for the fourth derivative on the specified domain
- Inference from graph: the fourth derivative is largest in magnitude at x = 1
- Fourth derivative value is 2 at x = 1, giving the bound M = 2
- Known values: M = 2, x = 2, x₀ = 1
- Discussion on bounding the error in MATLAB Taylor series
- Discussion on using graphical methods to develop bounds for the fourth derivative in MATLAB
## MATLAB Function Behavior
- Observation that as x decreases towards 1, the function grows, indicating the largest value is at x = 1
- Mention of using tools to determine the remainder R3 of x at 2
- Clarification on the use of the fourth derivative in Taylor series, specifically with n = 3 and the expression involving factorial and powers
## MATLAB Taylor Series Calculations
- Magnitude of the fourth derivative is less than or equal to 2 on the domain from 1 to 2
- Calculation involves (2 - 1)^4 over 3 factorial
- Calculation confirmed: 2 divided by 3 for the fourth derivative term
## MATLAB Remainder Concerns
- Concern about the presence of a four factorial in the denominator in notes
- Mention of Taylor remainder and Lagrange remainder formation
## MATLAB Correction Notes
- Correction made: n plus one factorial should be used instead of n factorial
- Calculation detail: 2 · (2 − 1)⁴ / 4! = 2/24 = 1/12
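Assuming the running example is f(x) = x·ln(x) (an inference consistent with the fourth derivative 2/x³, the target value ≈ 1.38, and the error 0.052961 recorded in these notes), the bound computation can be sketched in Python:

```python
import math

# Assumed running example: f(x) = x*ln(x), centered at x0 = 1, evaluated at x = 2
f = lambda x: x * math.log(x)
x0, X = 1.0, 2.0

# Derivatives at x0 = 1: f'(1) = 1, f''(1) = 1, f'''(1) = -1
p3 = f(x0) + 1.0*(X - x0) + (1/2)*(X - x0)**2 - (1/6)*(X - x0)**3

abs_err = abs(f(X) - p3)                      # actual Taylor error
M = 2.0                                       # |f''''(x)| = 2/x^3 <= 2 on [1, 2]
bound = M * (X - x0)**4 / math.factorial(4)   # Lagrange bound: M (x - x0)^4 / 4! = 1/12

print(abs_err, bound)   # ~0.052961 and ~0.083333
```

The actual error indeed lands below the Lagrange bound, matching the lecture's conclusion.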
## General Suggestions
- Suggestion to make a note for future reference if an issue does not occur again
- Discussion on notation for Taylor series: 'a' as the centering point (x₀) and 'x' as the estimation point
## MATLAB Visualization Plans
- Plan to leverage computer power to visualize completed Taylor work
## MATLAB Function Definition
- Defining the original function in MATLAB without needing dot operators
- Generating a list of input values for x and applying log x to create a new list
## MATLAB Vector Operations
- Accepting vectors as inputs without needing dots on pluses and minuses
- Element-wise operations are default for pluses and minuses in vectors
- Products require special attention when dealing with vector inputs in MATLAB
## Taylor Series Details
- Third derivative referred to as f triple prime, fourth derivative as f quadruple prime
- Centering point x₀ is equal to one for Taylor series
- First degree Taylor polynomial is also known as the tangent line approximation
- Second degree Taylor polynomial referred to as parabolic approximation
- Cubic approximation follows the parabolic approximation in Taylor series
- At the centering point, Taylor series approximations are exact
- Outputs include the function itself represented as a column vector
## Plotting and Approximation Details
- Plotting from zero to five with 100 points for approximation
- Capital X represents the approximation point, x = 2
- As the radius increases, the approximation near the center improves
- Convergence of Taylor series can be poor depending on the function
- Plan to plot inputs with corresponding outputs for visualization
- Use 'hold on' in MATLAB for additional graphical work
- Plot includes function and polynomial approximations up to cubic order
- Increase line width for better visibility
- Use marker size 30 for plotting specific points
## Plotting Enhancements
- Plot will include a point on the f curve at x = 2 to indicate the target
- Output from the polynomial will be plotted to compare with the target
- Use 'hold off' after plotting to stop adding to the current graph
- Add a legend to the plot for clarity
## Error Analysis and Aesthetic Improvements
- Utilize generative AI for enhancing plot aesthetics, including legend and titling
- Print absolute error and absolute percent error for approximation
- Print bound on Taylor error for analysis
- First order Taylor polynomial is a straight line, representing the tangent line approximation
- Point of tangency for the first order Taylor polynomial is at x = 1
## Approximation Observations
- Marker size is considered too large, appearing as a blob
- Second order approximation is not quite accurate
- Third order approximation improves curvature, described as cubic style
- Blue point represents the target value, 1.38
- Purple point represents the output value, 1.33
- Quantifiable error between third order Taylor polynomial and target function is 0.052961
- Absolute error is calculated as the absolute value of f(2) minus p
- Absolute percent error calculated as approximately 3.8%
- Error less than the bound put down by Taylor
## Homework Updates
- Code with extensive comments will be posted for those who have questions
- Concise version of the code will also be posted for those who prefer it
- Homework due date moved to next Friday
- Second homework due date set for Wednesday, a week after the first homework submission
## Miscellaneous
- Casual conversation about personal plans, no relevant meeting details
## Course Material Access
- Assignments will remain accessible after the course closes
- Exporting the conversation in markdown (.md) format produces the PDF posted on Canvas
- Images in the PDF are rendered as text, not from markdown
- Obsidian is a markdown editor that allows PDF export
## Advanced Approximation Discussion
- Discussion on multivariate Taylor or quadratic approximation
- Possibility of cubic approximation with higher dimension tensor
- Discussion on third order derivatives from multivariate functions using tensorial notation
- Multi-index Taylor notation is convoluted but effective
## Arbitrary Approximation Goals
- Discussion on the goal of achieving arbitrary approximation
- Discussion on the value of code for cubic multivariate polynomial approximation
## Academic Publication Discussion
- Discussion on the effectiveness of quadratic and cubic terms in polynomial approximation
- Discussion on the novelty required for academic publication in mathematics
## Computational Efficiency and Modeling
- Discussion on efficiency improvements in computational sciences as a publishable topic
- Mention of a published article on modeling a zipline, potentially relevant to current discussions
- Discussion on the importance of applying mathematical models to real-world scenarios for computational efficiency
## Domain Extension in Approximation
- Discussion on extending the domain of approximation by adding more terms or re-centering Taylor series
## Slope Fields and Tangent Lines in Approximation
- Discussion on using slope fields and tangent lines in approximation, similar to techniques in computer graphics
## Visual Techniques in Approximation
- Mention of using squinting as a technique in approximation, potentially related to visual perception in modeling
- Mention of a gambling matrix in relation to visual techniques
## Relevance to Other Disciplines
- Mention of capturing words on the backboard during discussions
## Feasibility of Mathematical Models
- Discussion on the feasibility of certain mathematical models being achievable due to their simplicity
## Goal Determination in Mathematical Models
- Discussion on the goal of determining relationships between two points
- Question raised about the requirement of this topic for physics
# 6-30-25
## Topics
- Course logistics and administration updates
- Homework 1 adjustments and project allocation
- Introduction to Taylor series (concept, coding implementation)
- Finite difference approximations via Taylor expansion
## Key Notes
### Course Logistics
- Issue persists with ‘567 not loading with AI’; instructor plans workflow streamlining to reduce manual switches.
- Homework 1 Problem 5 is being moved to a **project** due to its length (15–17 pages when processed as a project).
- Homework due date clarified: **Wednesday by 11:59 PM**, Canvas will accept multiple uploads, PDFs, and zipped code.
- Instructor is transitioning away from Canvas “activities” in favor of direct instructions embedded in lecture pictures to streamline work across courses.
### Project Updates
- Six draft projects are being prepared for proofing today; Homework 2 will drop tomorrow.
- Projects will integrate multivariable Taylor series, eigenvalue/eigenvector computations, and scientific visualization.
### LOs/LRs Grading Workflow
- Generative AI assessed recent LOs/LRs based on prompt counts, word counts, and completion detection.
- Errors in automated grading are being surveyed; students should report mismatches via the survey linked in announcements.
- First week’s LOs/LRs will be marked as complete for submissions regardless of automated detection issues.
---
## Lecture Content
### Taylor Series: Conceptual Introduction
- Defined Taylor series as an infinite sum of polynomial terms representing a function.
- **Transcendental nature**: True Taylor series require the infinite sum; Taylor polynomials truncate at finite $n$.
- Introduced remainder term as the difference between the polynomial approximation and true function value.
### Coding Implementation (MATLAB)
- **Example:** Exponential function $f(x)=e^x$, centered at $x_0=0$ (Maclaurin series)
- Developed code to:
- Define $x$ as a vector input using `linspace`.
- Initialize $y$ as a zero vector matching the size of $x$.
- Loop from $k=0$ to `Taylor_upper` adding each term $\frac{x^k}{k!}$.
- Discussed use of dot operators (`.*`, `.^`) for vectorized computation.
- Introduced anonymous function handles for Taylor coefficients for modularity.
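The loop described above, sketched in Python/NumPy instead of the classroom MATLAB (`Taylor_upper` is the truncation index named in the notes):

```python
import math
import numpy as np

x = np.linspace(-1, 1, 50)   # vector input, as with linspace in MATLAB
y = np.zeros_like(x)         # initialize y as a zero vector matching x
Taylor_upper = 20            # truncation index for the Maclaurin series of e^x

for k in range(Taylor_upper + 1):
    y += x**k / math.factorial(k)   # add term x^k / k!, element-wise

print(np.max(np.abs(y - np.exp(x))))   # residual is tiny on [-1, 1]
```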
### Sin(x) Series Considerations
- Discussed odd-only summation for $\sin(x)$ Maclaurin series:
- Required zeroing coefficients for even indices using logical checks with `mod(k,2)`.
- Implemented sign alternation via $(-1)^{(k-1)/2}$ for odd indices.
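A sketch of the odd-only summation with the `mod(k,2)` check and sign alternation described above (Python; `n_upper`, the truncation index, is an assumed name):

```python
import math

def sin_maclaurin(x, n_upper):
    """Maclaurin series of sin(x): sum odd-index terms through k = n_upper."""
    y = 0.0
    for k in range(n_upper + 1):
        if k % 2 == 1:  # mod(k, 2): zero out even indices, keep odd ones
            y += (-1)**((k - 1)//2) * x**k / math.factorial(k)
    return y

print(sin_maclaurin(1.0, 15))   # close to sin(1) ~ 0.841471
```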
---
### Numerical Differentiation via Taylor Expansion
- Introduced $\Delta x = h$ as step size.
- Derived forward difference approximation:
  $$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}$$
- Noted:
- First-order accurate (truncates after linear term; error is $O(h)$).
- Approximations trade off between **truncation error** (from series truncation) and **round-off error** (from machine precision).
- Students questioned:
- How $h$ affects accuracy: smaller $h$ reduces truncation error but may increase round-off error.
- Differences between finite difference and symbolic derivatives: finite differences approximate using function values; symbolic differentiation yields exact expressions.
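The first-order behavior can be checked numerically: halving $h$ should roughly halve the error. A sketch using sin as an assumed test function:

```python
import math

def fwd_diff(f, x0, h):
    # forward difference: (f(x0 + h) - f(x0)) / h, first-order accurate, O(h)
    return (f(x0 + h) - f(x0)) / h

exact = math.cos(0.5)   # exact derivative of sin at 0.5
e1 = abs(fwd_diff(math.sin, 0.5, 1e-3) - exact)
e2 = abs(fwd_diff(math.sin, 0.5, 5e-4) - exact)
print(e1 / e2)   # ~2: halving h halves the error for an O(h) method
```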
---
## Implementation Reflection
- Emphasis on automating Taylor expansion approximations in code rather than hard-coding known series.
- Previewed transition to constructing finite difference formulae for numerical derivatives in upcoming sessions.
---
# 7-1-25
## Project Updates
- Two more projects have been posted
- Review of project formats is planned
- Bullet points will be added to describe project contents
- Canvas will be used for project submissions starting next Monday
- Feedback and completion status will be provided based on submissions
## Feedback Process
- Audio transcriptions will be used for quicker feedback
## Homework Updates
- Homework number one is due tomorrow
- Problem five moved to a project due to length
- Homework two will be posted tomorrow
## Lecture Updates
- Lecture objectives are slightly behind schedule
- Lecture objectives will be synced with Canvas
- Efforts are being made to catch up on lecture objectives
## Current Lecture Topics
- Discussion on finite difference approximations to derivatives
- Upcoming topic: truncation versus round-off errors
- Introduction to calculus earlier in the course than usual
- Introduction to Taylor series in the context of scientific computing
- Discussion on constructing derivatives in a computational environment
- Further exploration of Taylor series for additional insights
## Lecture Materials
- Discussion on constructing Taylor series centered at x₀ = 1/2 instead of 0
- Markdown file in lecture picture category contains instructions
## Lecture Activities
- Centering Taylor series at points other than zero or negative numbers is recommended
- Developers updated the site to make the green check box functional
## Teaching Assistant Tools
- Use the AI TA to learn material; green check mark testing is optional
- Feedback on green check mark testing will be provided to administrators
## Upcoming Meetings
- Last meeting scheduled for Thursday, 07/11
## Additional Insights
- Detailed breakdown of Taylor series using order notation
- Explanation of Taylor series terms stopping at order x to the fourth without using summation
- Highest power index in Taylor series indicates the limit of terms discussed
- Taylor series relies on derivatives and plugging them into the equation
## Historical Context
- Historical context: Early radar development involved solving complex partial differential equations
- Exotic functions emerged from manual solutions to these equations
## Computational Methods
- Truncated Taylor series used as an alternative to exotic functions in early computational algorithms
- Taylor series allowed calculations with limited computational power, such as with pocket calculators
## Taylor Series Expansion
- Isolating derivatives to create formulas for approximate derivatives using the original function
- Expression involves f(x₀) + f'(x₀)h ± f''(x₀)/2! h², indicating the use of Taylor series expansion
- Inclusion of f'''(x₀)/3! h³ term in Taylor series, with alternating signs for ±h
- Big O notation indicates error term of order h⁴
- f'(x₀) = (f(x₀ + h) - f(x₀))/h + O(h)
- Plus h for the first term, minus h for the second term, subtract the two, divide by 2h, resulting in Big O(h²)
- Discussion on higher order derivatives and their importance in life applications
- Explanation of truncation step in obtaining approximations from Taylor series
## Finite Difference Formulas
- Finite difference formulas are used with truncation order to approximate derivatives
- First order finite difference formula involves truncating at first order in h (delta x, step size)
- Second order finite difference formula is used for the first derivative
## Advanced Derivative Concepts
- Discussion on third derivatives and their role in Taylor approximation for any function
- Emphasis on calculating derivatives without prior knowledge of the function's derivative structure
- Algebraic methods to isolate f'(x₀) and f''(x₀) using truncation
## Truncation Errors
- Discussion on first and second order truncation errors in Taylor series
- Different truncation methods based on the extent of Taylor series used
- h is described as Delta x, providing a standard way to look away from x₀
## Variable Usage in Computations
- Discussion on using h as a variable for limits, avoiding delta h's or delta x's in computations
- Clarification on third order division by 3h or 2h²
## Manipulation of Taylor Series
- Discussion on different powers of h in the denominator based on Taylor series manipulation
## Visual Aids
- Instructor plans to show a partial picture related to the topic
- Instructor attempts to find and zoom into a specific region of a picture for better illustration
- Instructor discusses the importance of delta x being small and its relation to truncation
- Discussion on scaling of first and second order truncation errors with decreasing delta x or h
## Error Reduction and Visual Challenges
- Higher order truncation decreases error faster with smaller mesh sizes
- Instructor prepared a graph to show in two parts but is unable to find the file
## Error Analysis
- Statement of error: difference between derivative and finite difference approximation
- Explanation of log-log plots with logarithmic scaling on both axes
- Explanation of how power functions appear as lines on log-log plots, with slopes indicating the power (e.g., x² appears as a line with slope 2)
## Truncation Error Comparison
- Second order truncation error is more accurate for a given h value compared to first order
- As h decreases, second order truncation error decreases faster than first order
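The slope reading described above can be reproduced numerically: over one decade of $h$ on a log-log plot, a first-order error falls with slope ≈ 1 and a second-order error with slope ≈ 2 (Python sketch; sin is an assumed test function):

```python
import numpy as np

x0, exact = 0.5, np.cos(0.5)
h1, h2 = 1e-2, 1e-3   # one decade apart in h

# forward difference, O(h); central difference, O(h^2)
e_fwd = [abs((np.sin(x0 + h) - np.sin(x0)) / h - exact) for h in (h1, h2)]
e_ctr = [abs((np.sin(x0 + h) - np.sin(x0 - h)) / (2*h) - exact) for h in (h1, h2)]

# slope on a log-log plot = change in log10(error) over one decade of h
slope_fwd = np.log10(e_fwd[0] / e_fwd[1])
slope_ctr = np.log10(e_ctr[0] / e_ctr[1])
print(slope_fwd, slope_ctr)   # ~1 and ~2
```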
## Higher Order Terms in Series
- Retained terms up to big O(h^4) or O(h^3) in the series
- Discussion on how h^3 line should appear below blue line on graph, indicating steeper descent
## Formula Validity Concerns
- Instructor mentions that making h too large can invalidate the formulas
## Graphical Analysis
- Making h small is generally beneficial for accuracy
- On log-log plots, different methods may converge at a common origin
- As h approaches zero, everything should become exact, but this is not reflected on the current graph
## Additional Error Types
- Discussion on other types of errors not currently visible on the graph
## Finite Difference Approximation
- Finding a second order finite difference approximation for s''(x₀)
## Instructional Techniques
- Instructor suggests mentally and physically juggling terms for better understanding in upcoming examples
- Two-minute pause for students to think about term derivation
## Data Integrity Issues
- Instructor mentions potential data loss during analysis
## Second Order Approximations
- Instructor is searching for a second order finite difference approximation for f''(x)
- Solving for f''(x₀) with a second order truncation error involves dividing by h²
- Dividing h to the fourth by h squared results in a big O of h squared
- Truncation error is determined by the power of h in the big O term
- Subtracting known formulas at plus h and minus h cancels even terms, isolating f prime
- Adding terms for plus h and minus h shows changes in the first term f(x₀)
## Combining Terms in Approximations
- Retaining two f(x₀) terms for calculation
- f' terms cancel out when adding plus h and minus h
- f'' terms have the same sign and add up to two when combined
- f''' terms cancel out when adding plus h and minus h variants
- Solving for f''(x₀) involves subtracting 2f(x₀) from f(x₀ + h) and f(x₀ - h)
- Dividing by h squared reduces the power of h to h squared, affecting truncation
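The add-the-two-expansions derivation above yields the standard second-order central formula for f''; a Python sketch checked against a known second derivative ((sin)'' = −sin):

```python
import math

def d2_central(f, x0, h):
    # (f(x0 + h) - 2 f(x0) + f(x0 - h)) / h^2 : second-order accurate, O(h^2)
    return (f(x0 + h) - 2*f(x0) + f(x0 - h)) / h**2

approx = d2_central(math.sin, 0.5, 1e-4)
print(abs(approx + math.sin(0.5)))   # small truncation + round-off error
```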
## Taylor Series and Approximations
- Taylor series approximation leads to a second order finite difference approximation for f''(x₀)
- Taylor series is centered at x₀
- Symmetrical points around x₀ are considered: plus h and minus h
- Forward difference involves looking ahead of x₀ by adding h
- Instructor discusses the structure of finite difference approximation
## Third Order Approximations
- Instructor plans to derive a third order approximation next
- Extracting the third derivative is the current focus
- Acknowledgment that similar work has been done by others
- Instructor aims to combine elements to derive f''' or f''''
- Process involves careful combination of terms to achieve desired derivative
## Extracting Derivatives from Taylor Series
- Taylor series contains all necessary derivatives for approximations
- Focus is on extracting specific derivatives rather than all information
## Instructor's Plans
- Instructor plans to keep certain pieces visible for future reference
## Developing New Formulas
- Developing formulas for f(x₀) with plus or minus two h's
- Incorporating plus or minus two h's into f(x₀) affects the structure of the series
- The squaring of terms affects the plus or minus structure in the series
- Squaring terms in the Taylor formula affects the plus or minus structure, resulting in h squared and a factor of four
- Squaring leads to the inclusion of f'''(x₀) in the formula
- Cubing terms in the Taylor formula results in a factor of eight
- Exhausted possibilities from manipulating x₀ plus and minus h
## Higher Order Derivatives
- Considering order, adding h to the fourth for constructing other derivatives
- Start adding further h contribution after using up terms
- Gathering more data by looking one more click away in h
## Coefficient Combinations
- Instructor proposes using a combination of coefficients (a, b, c, d, e) with f(-2), f(-1), f(0), f(1), f(2)
- Placeholder notation for f(x₀) minus two h's is being used
- Discussion on the number of h's in f(x₀) minus two h's, minus one h, plus one h, and plus two h's
- Using capital letters A, B, C, D, E for coefficient selection to achieve desired triple derivative
- Shorthand notation allows for efficient term selection in f(x₀) representation
- f(x₀) remains consistent across different formulas when adjusting by plus or minus h
- Factoring off f(x₀) results in different terms: a, b, c, d, e
- Changes expected in the structure, increasing complexity
## Interpretation of Notations
- Discussion on the meaning of f'₀ in the current context
- f'₀ is interpreted as f'(x₀)
- Grouping all f' terms results in negative two times a, leading to negative two h
- All f' terms should have an h multiplier at the end
- f(-1) results in a negative b when considering the formula
- f₀ is interpreted as f(x₀), resulting in zero contribution
- No 'c' term involved in f₀, only in specific terms
- f₁ contributes a 'd' term
- f₂ contributes a term
- Plus two e and then all terms are multiplied by h
## Contributions of f₀''
- f₀'' contributes h squared to the formula
- Evenness of terms results in symmetry, making negative and positive contributions equivalent
- Calculation includes a factor of four over two factorial, simplifying to two a
- Factoring off f''(x₀) and h squared leaves behind 1/2 b
- No f(x₀) involved in f''(x₀) terms
## Simplifications and Factorials
- d/2 and 2e are part of the formulation
- f₀''' contributes h cubed to the formula
- Simplification of the f₋₂ term, (−2)³/3!, gives negative four thirds, associated with letter A
- Next term involves negative one over three factorial
- Times b; no c term; plus d over three factorial; plus four thirds e
- Searching for f triple prime, requires dividing everything by three factorial
## Truncation Errors and Orders
- Dividing by h cubed results in first order truncation error
- Writing out one more step to catch fourth terms
- Big O notation for h to the fifth for second order
## Fourth Derivative Discussion
- Discussion on the fourth derivative and its independence from plus-minus structure
- Raising 2h to the fourth power for fourth derivatives results in h to the fourth
- h to the fourth power and two to the fourth due to the 2h part, divided by four factorial
- f₋₁ does not require consideration of plus or minus in even order
- h raised to the fourth power without plus or minus consideration
## Additional Simplifications
- No c term; d over four factorial; plus two to the fourth over four factorial times e
- Big O notation for h to the fifth
## Problem Solving Approach
- Discussion on the approach to solving the problem using algebra
- Choose a, b, d, and e so that the parenthesized quantity is zero
- Set the parenthesized quantities to zero, then to one (for the f''' term), and back to zero
- Five unknowns: a, b, c, d, e
- Five equations available to solve for the unknowns
- First equation: a, b, c, d, and e each appear to the first power and are added together
## Coefficient Matrix Details
- Suggestion to consider the powers of a, b, c, d, and e in the formulation
- Set up a coefficient matrix for a, b, c, d, and e
- First equation needs to be set to zero
- Second, third, and fifth equations need to be set to zero
- Fourth equation (the f''' coefficient) needs to be set to one
- First row: the coefficient on each of a, b, c, d, and e is one
- Second row: negative two, negative one, zero, one, two
- Third row: two, one half, zero, one half, two
- Fourth row: negative four thirds, negative one sixth, zero, one sixth, four thirds
- Last row: two to the fourth over four factorial, one over four factorial, zero, one over four factorial, two to the fourth over four factorial
- Discussion on factorials and their representation in matrix problems
## Matrix Scaling and Methods
- Transition from two by two to five by five matrix
- Consider scaling due to square matrix and vector with five elements
- Method of undetermined coefficients applied to finite difference
- Five by five matrix due to five data points and five coefficient equations
## Truncation Error and Accuracy
- Fifth equation necessary for solving batch due to truncation error
- Truncation error requires extending to fifth order
- Two methods for achieving first order accuracy and beyond
## Data Requirements for Accuracy
- Minimum data required for third order accuracy discussed
## MATLAB and Code Utilization
- MATLAB file named 'f d matrix Helper' is used for solving matrix problems
- The finite difference matrix helper .py code is ready for execution
- First row of the matrix consists of ones
## Solving Equations and Derivatives
- Solving the equation a times x equals b to find x
- Determining values of a, b, c, d, and e for third derivative
- In MATLAB, the solve is written x = A\b, the backslash oriented from upper left to lower right
- Calculating the determinant of a five by five matrix is costly on a computer
- Determinant of the matrix is one, indicating a unique solution exists
- Solution from matrix calculation: a = -1/2, b = 1, c = 0, d = -1, e = 1/2
- Coefficient for f triple prime at x naught is one, divided by h cubed
## Derivative Calculations
- Third derivative calculation involves a = −1/2 and b = 1 for f(x₀ − 2h) and f(x₀ − h)
- No c term; minus f(x₀ + h); plus one half f(x₀ + 2h); all over h cubed
- Plus order h to the fifth, divided by h cubed, resulting in h squared
- More data points needed to capture desired derivative accurately
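The five-by-five system above can be set up and solved directly. A Python/NumPy sketch (the MATLAB version would use `x = A\b`): row j holds the Taylor coefficients offsetʲ/j! from the five expansions, and the right-hand side keeps only the f''' term:

```python
import math
import numpy as np

offsets = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # data points x0 + s*h

# Row j: s^j / j! -- the coefficient of h^j f^(j)(x0) in each expansion
A = np.array([offsets**j / math.factorial(j) for j in range(5)])

b = np.zeros(5)
b[3] = 1.0   # zero out f, f', f'', f''''; keep f''' with coefficient one

coeffs = np.linalg.solve(A, b)   # MATLAB equivalent: coeffs = A\b
print(coeffs)   # [-0.5, 1.0, 0.0, -1.0, 0.5], matching a through e above
```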
## Linear Algebra for Coefficient Calculation
- Use linear algebra to write coefficients as a linear system for triple derivative
- Remove unwanted terms to isolate desired term in the system
## Meeting Adjournment
- Meeting adjourned for a short break
## Task Assignments
- Discussion on whether additional tasks will be assigned
## Miscellaneous
- Discussion on naming suggestions for a new dog
- Suggestion to name the new dog 'Truncation Error'
## Software Tools
- Discussion on the use of MATLAB for handling imaginary numbers and rotations
- Unrelated conversation, no relevant details to add
## File Management
- Discussion on file management and organization
## Documentation
- Mention of grouping all sightings for documentation
## Mathematical Discussion
- Explanation of terms: a multiplies f(x₀ − 2h), b multiplies f(x₀ − h), c multiplies f(x₀), d multiplies f(x₀ + h), e multiplies f(x₀ + 2h)
- Discussion on the role of f(x₀) in equations and its relation to other terms
## Miscellaneous
- Mention of gambling without money involved
## Live Demonstrations
- Attempt to apply finite differences in a live demonstration
- Attempt to construct Taylor series approximations for an arbitrary function using finite difference formulas
- Discussion on using MATLAB for numerical formulas with sine x
- Discussion on differentiating functions using finite difference formulas
- Mention of central difference setup for first derivative at second order
## Function Analysis
- Explanation of encapsulating f(x-h) in parentheses and dividing by 2h
- Description of the anonymous function handle 'd f' requiring a function, a point, and h size
## Derivative Derivations
- Derivation of the second derivative: f(x + h) - 2f(x) + f(x - h)
## Expression Simplification
- Mention of overuse of parentheses in live settings
- Introduction of third derivative concept
- Discussion on simplifying expressions by moving constants into the denominator
- Simplification: −f(x − 2h) + 2f(x − h) − 2f(x + h) + f(x + 2h), all divided by 2h³
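The resulting third-derivative formula, written as a Python sketch and checked against a known derivative ((sin)''' = −cos):

```python
import math

def d3_central(f, x0, h):
    # (-f(x0-2h) + 2 f(x0-h) - 2 f(x0+h) + f(x0+2h)) / (2 h^3) : O(h^2)
    return (-f(x0 - 2*h) + 2*f(x0 - h) - 2*f(x0 + h) + f(x0 + 2*h)) / (2*h**3)

approx = d3_central(math.sin, 0.5, 1e-2)
print(abs(approx + math.cos(0.5)))   # small second-order truncation error
```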
## Expression Concerns
- Mention of h cubed and two in the denominator
- Concerns about parenthesis work and autocompletion
## Calculation Parameters
- Set n equal to 200 for calculations
- X axis defined from negative pi to pi
- Linearly space from -π to π in MATLAB and slice into 200 points
- Set x₀ to 0 as the centering point
- Set h to 0.1
## Finite Difference Approximations
- Formulas are second order accurate finite difference approximations to derivatives
- Three second order accurate finite difference approximations available
- Largest Taylor polynomial constructible is third degree
- Domain for Taylor polynomial defined by x variable
- x₀ represents the centering point for Taylor polynomial
- Building Taylor polynomial with y = f(x₀) + f'(x₀)(x − x₀)¹
- Discussion on language for f prime in Taylor polynomial
- DF is the derivative of f, centered around x₀
- Importance of sending x₀ and h for derivative calculations
- x₀ is a scalar and the centering point of zero
- In MATLAB, x is treated as a vector for element-wise computation
- MATLAB performs element-wise subtraction of x₀ from each element in the vector
- Scalar represents the derivative of f at x₀ with step size h
## Taylor Series Application
- Scalar should be applied to x - x₀, which is a vector, using dot multiplication
- Add 0.5 times the second derivative to the expression, using f(x₀) and h
- Square each element in x - x₀ using dot caret for element-wise operation in MATLAB
- Add third term to expression: 1/3! (one divided by six)
## Third Derivative Application
- Third derivative denoted as d three f, applied with f, x₀, and h
- Dot multiplication applied to x - x₀, raised to the power of three
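The construction above (finite-difference derivatives feeding a cubic Taylor polynomial for sin, with n = 200, x₀ = 0, h = 0.1) can be sketched in Python:

```python
import numpy as np

def df(f, x0, h):    # central first derivative, O(h^2)
    return (f(x0 + h) - f(x0 - h)) / (2*h)

def d2f(f, x0, h):   # central second derivative, O(h^2)
    return (f(x0 + h) - 2*f(x0) + f(x0 - h)) / h**2

def d3f(f, x0, h):   # central third derivative, O(h^2)
    return (-f(x0 - 2*h) + 2*f(x0 - h) - 2*f(x0 + h) + f(x0 + 2*h)) / (2*h**3)

f = np.sin
x = np.linspace(-np.pi, np.pi, 200)   # domain sliced into 200 points
x0, h = 0.0, 0.1                      # centering point and step size

# cubic Taylor polynomial built from numerically approximated derivatives
y = (f(x0)
     + df(f, x0, h) * (x - x0)
     + 0.5 * d2f(f, x0, h) * (x - x0)**2
     + (1/6) * d3f(f, x0, h) * (x - x0)**3)

# near the center, the cubic tracks sin(x) closely
print(np.max(np.abs(y - np.sin(x))[np.abs(x) < 0.5]))
```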
## Numerical Differentiation and Taylor Series
- Numerical differentiation formulas are key components
- Constructing Taylor series through numerical differentiation
- Plotting the original function using x values and f(x)
- Retaining more terms in the Taylor series improves truncation error
## Numerical Approximations
- Higher order Taylor series terms are coupled with finite difference approximation
- Parenthesis error encountered during execution
- No legends in plots, but cubic and sine functions are distinguishable
## Domain and Code Verification
- Consider expanding the domain to observe sine function oscillations
- Verify if third order Taylor polynomial codes are on the lecture codes section
## Finite Difference Methods
- Discussion on transitioning to the next topic or section
- Python and MATLAB have built-in functions for finite difference approximation of a certain order
- Central difference method involves symmetric selection of neighboring points
- High order derivatives can be approximated even with limited domain data
- Back differences and center differences discussed for various derivative orders
- Language used: 'blank order accurate approximation to the blank derivative'
- Transition from symbolic derivatives to numerical approximations using known formulas
## Experimental Calculations
- Attempting to use exponential of the sine function in calculations
- Discussion on the relationship between derivatives and their applications
- Increasing degrees of freedom requires expanding data range from center point
## Finite Difference Tutorial
- Issues with vector-related code causing unexpected behavior
- Consider using built-in functions to avoid these issues
- Brief pause in discussion to adjust line 12 in code
- First order derivative being tested, unsure of impact
- Finite Difference Tutorial: Truncation versus Roundoff available on the website under intro to scientific computing
- Derivative formulas available in both MATLAB and Python for porting purposes
## Graph Replication and Accuracy
- Key graph replication possible from code windows, not shared in code base due to narrative context
- Second order accurate finite differences expected with step size h
## Error Analysis and Expectations
- Expectation of less absolute percent error as step size decreases
- Higher order accurate finite differences expected to outperform first order
- Avoid analyzing data to the left of a certain point in the formula
## Truncation Error and Accuracy
- As h approaches zero, the approximation becomes 100% accurate and truncation error goes to zero
- Third derivative representation becomes exact as h decreases
- Improvement in accuracy observed as h decreases, ignoring data to the left of a certain point
- Achieved absolute percent error of 10^-10 with h = 10^-2, indicating high precision
- As h becomes very small, f(x₀ - h) and f(x₀ + h) become approximately equal, affecting subtraction results
## Precision Challenges
- Subtractions approaching zero due to finite precision, leading to potential catastrophic loss of precision
- Division by increasingly smaller h values causing calculations to appear larger
- Making h very small can cause formulas to become ill-behaved due to finite precision issues
- Subtracting nearly equal numbers in MATLAB or Python can lead to loss of significance due to differences in decimal places
- Round off error caused by subtraction of nearly equal numbers and division by numbers close to zero
- Long-term solution isn't just reducing h indefinitely for better accuracy due to truncation limits
- Round off error increases in calculations as h becomes very small
- Computers use a machine number line due to limited bits, unlike the infinite number line in our minds
- Round off errors occur due to gaps in the computer's number line, noticeable during subtraction of similar numbers
- Floating point arithmetic is prone to round off errors, to be discussed next time
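The subtraction issue itself can be seen directly; a tiny Python sketch (the 1e-16 increment is an illustrative choice, chosen to sit below the spacing of doubles near 1):

```python
# An increment below the machine spacing near 1.0 is rounded away entirely,
# so the subtraction that should recover it returns zero instead.
tiny = 1e-16                 # smaller than the gap between 1.0 and the next double
survived = (1.0 + tiny) - 1.0
print(survived)              # 0.0
```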
## IEEE Standards in Programming
- Preexisting built-in commands available (likely MATLAB's eig) that take matrix input
- Discussion on eigenvectors and eigenvalues in relation to matrix outputs
## Second Order Accuracy Issues
- Discussion on correspondence for accuracy improvement
- Round off errors appear sooner with higher order methods due to smaller step sizes
- Discussion on parameters affecting higher order derivative accuracy: h and truncation order
## Derivative Analysis
- Mapping through h on the graph to analyze first derivative for different truncations
- Investigating the role of higher order derivatives in relation to truncations
- Third derivative accuracy is influenced by truncation level and order of accuracy
- Second order accuracy issues observed when h is too low, leading to instability and unreliable data
## Optimal h Determination
- Formal mathematics available online for optimal h determination
- Observations show improvements in results as h is adjusted
- Sudden changes in results observed after certain adjustments
- Python and MATLAB implementations adhere to IEEE standards for arithmetic operations
- Libraries exist for infinite precision, but they slow down computations when more decimal places are requested
- Internal programming can store additional decimal places, but it affects performance
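Python's standard decimal module illustrates the same trade-off (used here as a stand-in; the lecture's libraries were MATLAB-side):

```python
from decimal import Decimal, getcontext

# Requesting more significant digits costs memory and time, but the precision
# level is under the user's control rather than fixed at 52 bits.
getcontext().prec = 50
third = Decimal(1) / Decimal(7)
print(third)    # 1/7 to 50 significant digits
```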
## Machine Number Line and Precision
- Oscillations observed on the round off error side, potentially due to finite precision issues
- Symbolic computation aims for 100% precision, but requires significant RAM for high decimal places
- Built-in libraries prioritize accuracy over speed
- Discussion on finite precision arithmetic in relation to decimal approximation and Taylor series
- Taylor series are fundamental for deriving functions, especially with pocket calculators
- Taylor series concepts will recur in various settings, emphasizing the importance of remainders
## Software Implementation
- Decision to remove unnecessary visual aids from the presentation
- Porting code between Python and MATLAB to maintain dual compatibility
- Discussion on code base management and decision-making processes
- Discussion on typical practices in scientific computing
- Discussion on routines and structures in scientific computing
- Investigating context and field for further insights
## Team Feedback
- Positive feedback on internal team morale
- Discussion on the impact of linear approximations on internal processes
- Concerns raised about efficiency costs in current processes
## Sequence Analysis
- Task assignment for first and second room completion
- Ensured first derivative calculations were accurate
- Discussion on the sequence of numbers and their implications
- Discussion on the transition from five to six equations in the analysis
## Handling Unknowns
- Discussion on methods to handle unknowns in equations
- Discussion on zero, one, two, three, and five in sequence analysis
- Discussion on the transition from six equations to eleven, three in sequence analysis
## Sequence Complexity
- Discussion on potential complexities in sequence analysis
- Discussion on the relationship between accuracy and the number of terms in sequence analysis
- Discussion on isolating grade in the context of five equations
- Discussion on the implications of five equations in current analysis
## Traffic Management
- Discussion on strategies for managing traffic effectively
# 7-2-25
## Grading Breakdown
- Homework contributes 80% to the final grade
- A- starts at 90 and goes up to 100
- Projects have two parts: initial build and additional extensions
- Two-part projects have a total of 20 points, with each part worth 2 points
- A category starts at 93
## Homework Deadlines
- Homework 1 is due tonight
- Homework 2 is posted on the website and is due next Wednesday
## Project Updates
- More projects were released yesterday, with additional ones expected soon
## Class Meetings
- Homework problems were trimmed down for clarity
- Lecture objectives are available in the lecture pictures file
- Lecture pictures have the same instructions as reflection activities
- Lecture objectives were updated last night
- Due dates will be set right before the class to identify waived assignments
- LO (Learning Objectives) instructions will be streamlined
- Today is the seventh meeting of the class
## Numerical Methods
- Discussion on rounding off versus truncation methods
- Discussion on modern programming languages for handling matrices and vectors
- Introduction to Taylor series as a key topic
- Developing numerical calculus using Taylor series
- Using Taylor series to create approximate definitions for derivatives on computers
- Plan to explore the next topic after derivatives soon
- Second order accurate approximation to the first derivative discussed
- Second order accurate approximation to the second derivative discussed
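The two formulas referenced are presumably the standard central stencils; a Python sketch (using f = exp so every derivative equals exp(x)):

```python
import math

# Second order accurate central differences for the first and second derivative
d1 = lambda f, x, h: (f(x + h) - f(x - h)) / (2 * h)
d2 = lambda f, x, h: (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x0, h = 0.5, 1e-4
print(d1(math.exp, x0, h))   # both should be close to exp(0.5) ≈ 1.6487
print(d2(math.exp, x0, h))
```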
## Coding Experiences
- Encountered an issue with code running inconsistently, resolved without clear changes
- Experience shared about MATLAB code behaving unexpectedly
- Clearing MATLAB's memory resolved the graph inconsistency issue caused by cross-wired memory spaces
- Discussion on nth order accurate finite difference approximation
## Higher Order Approximations
- Second order accurate approximation to the third derivative discussed
- Clarification on language used in discussing approximations
- Explanation of finite difference as subtractions approximating derivatives
- Discussion on nth derivative and its relevance in language
- Introduction to the concept of order accuracy in finite difference approximation
## Finite Difference Approximations
- Discussion on the role of the big O term in finite difference approximations
- Explanation of how the h in the big O term is raised to the power of p
- Description of pth order accurate finite difference approximation as a truncated formula
- Discussion on achieving pth order accuracy in finite difference approximations by manipulating terms
- Clarification on pth order formula truncated at h to the p for nth derivative at x naught
- Explanation of using k as an integer in finite difference approximations
## Taylor Series Applications
- Discussion on isolating f⁽ⁿ⁾ (the nth derivative) using Taylor series and zeroing out equations
## Equation Control
- Need to gather more points using plus or minus k for accessing derivatives in Taylor series
- Process involves zeroing out unnecessary terms in Taylor series
- Need more points for controlling equations by looking at f of x not and points away from it
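The "gather points at ±k and zero out terms" procedure can be phrased as a small linear solve; a NumPy sketch (the five point, third derivative case is an assumed example, not the exact one worked in class):

```python
import math
import numpy as np

# Stencil offsets k: sample f at x0 + k*h for k in {-2, ..., 2}
offsets = [-2, -1, 0, 1, 2]
n = 3                          # derivative we want to isolate
m = len(offsets)

# Taylor expanding f(x0 + k h) and collecting the coefficient of h^j f^(j)(x0)
# gives the conditions: sum_k w_k k^j / j! = 1 if j == n else 0
A = np.array([[k**j / math.factorial(j) for k in offsets] for j in range(m)])
b = np.zeros(m)
b[n] = 1.0
w = np.linalg.solve(A, b)      # weights: (1/h^3) sum_k w_k f(x0 + k h) ≈ f'''(x0)
print(w)                       # central third-difference weights
```

The solve recovers the classic weights (-1/2, 1, 0, -1, 1/2), i.e. f'''(x₀) ≈ [f(x₀+2h) − 2f(x₀+h) + 2f(x₀−h) − f(x₀−2h)] / (2h³).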
## Current Session Discussions
- Discussion on the V-shaped graph from last session and its implications
- Mention of upcoming homework due today and its potential impact on student engagement
- Discussion on splitting up files in computer systems
- Plan to rewrite code to separate anonymous functions into their own files for better accessibility
- Agreement to spend 15 minutes on controlling the Taylor series calculations
- Discussion on using code from yesterday to calculate Taylor series through finite difference formulas
- Performing the third degree Taylor polynomial in a separate folder for organization
- Saving work in 'Taylor live' and creating a subfolder 'seven two' for separation
## File Management and Code Documentation
- Opening a new file to explore formulas outside the current program context
- Copying existing content into the new file and converting it into comments
- Using the Command (or Ctrl) keyboard shortcut to turn lines into comments in MATLAB
- Plan to implement function for first derivative using notation DF
- Discussion on the function 'diff' and its role in returning the first derivative
- Saving progress in MATLAB even when no changes are made
- MATLAB attempts to automatically name files upon saving
- Clarification on defining DF in terms of the first derivative using second order finite difference
- DF will calculate the second order accurate finite difference approximation to the first derivative
- Plan to create two more files to replicate the behavior of DF
- Opening another file to continue work on finite difference approximations
## Function Naming and Requirements
- Naming convention for second derivative function as 'd two f' to maintain consistency
- 'd two f' function requires knowledge of the incoming function, the point of interest, and the h spacing for Taylor series derivation
- Anonymous function in MATLAB used to replicate 'd two f' functionality
- Saved the function as 'd two f' for consistency
- Discussion on naming convention for third derivative function as 'd three f' for consistency
- 'd three f' function requires inputs of f, x, and h for calculation
- Saved third derivative function as 'd three f' in MATLAB, confirming it as a function
- Three m files created, each containing code to replicate anonymous functions
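A Python analogue of the three files (names mirror the MATLAB ones; the third derivative stencil is the assumed standard five point formula):

```python
def DF(f, x, h):
    """Second order accurate first derivative, as in the DF file."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d_two_f(f, x, h):
    """Second order accurate second derivative, as in the d two f file."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d_three_f(f, x, h):
    """Second order accurate third derivative (five point central stencil)."""
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)
```

Each function takes the same three inputs the lecture names: the incoming function f, the point of interest x, and the h spacing.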
## Code Execution and Habits
- Discussion on defining code pieces within MATLAB and Python, and the importance of following calculations
- Preference for viewing code chunks before understanding their function
- Habit of frequently using the run button in MATLAB GUI
## File Locations and Execution
- Confirmation that the function is defined but needs to be called for execution
- Files related to TaylorSeries saved in 'seven two' directory
- Uncertainty about the location of other files, possibly in 'machine' directory
## Script Execution and Environment Management
- Running script in 'machine' directory, focusing on anonymous functions
- Commenting out certain calculations for clarity
- Running 'clear all' to reset the environment before execution
## Anonymous Functions in MATLAB and Python
- Driver file contains finite difference formulas for use in other programs
- Discussion on anonymous functions in MATLAB and Python, emphasizing their creation within code and conversion to m files for broader use
## Transition to New Files
- Transitioning to a different set of files and initiating the lecture
## Browser Navigation
- Navigating to a personal site using Firefox
## Finite Difference and Code Snippets
- Discussion on finite difference truncation versus round-off errors in web space
- Code snippets available for educational purposes to recreate the discussed picture
- Key feature of the dialogue is reaching the plot near the error analysis section
## Error Analysis and Accuracy Control
- Larger p value allows for faster control of accuracy lost in truncation
- Graph shows first order accurate approximation to the first derivative
- Error decreases with known function calculation
- Second order accuracy improves error faster as step size decreases
- Blue line outperforms red in terms of absolute percent error by six orders of magnitude
- Smaller step sizes initially reduce error but eventually lead to error growth for both lines
## Number Line Details
- Increasing detail to the number line and defining numbers on it
- Hiccups occur when defining numbers on the number line
## Paradigms in Arithmetic
- Real number line is a continuum of values, emphasizing the continuum aspect
- Real number line requires infinite precision, which computers lack
- Approximating the real number line in a computational environment is challenging due to its vast data
- Precision is crucial when selecting a number on the real number line for STEM work
- Broad scale understanding is necessary for effective number line utilization
- Orders of magnitude are crucial for understanding scales from atoms (~10^-10 meters) to intergalactic distances (~10^20 meters)
- Need for dynamic range to replicate the continuum of the real number line over wide scales
- Clear quantifiable precision is essential in encoding machine numbers
## Floating Point Representation
- Floating point number representation: x = ±(1 + f) * 2^e
- Little f in floating point representation lives between zero and one, known as the significand
- Mantissa is the logarithm of the significand, providing precision in floating point representation
- Exponent e in floating point representation ranges from -1022 to 1023
- The 1 + f term provides precision in floating point representation
- Exponent e provides scale in floating point representation
- As f approaches one, adding it to one approaches two, leading to an increment on the two scale
- MATLAB uses double precision format at 64 bits
- 52 bits are used for f and 11 bits for e in MATLAB's representation
- Clarification on spelling: 'precision' is spelled with one 's'
- Finite precision achieved by allocating bits to f and e
- Double precision uses 64 bits, with one bit for the sign
- Consequence of finite precision in floating point representation discussed
- Finite precision cannot encode all digits of irrational numbers like pi
- Finite precision results in gaps, leaving some real numbers inaccessible
- Infinite precision libraries in MATLAB can consume significant memory resources
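Python floats use the same IEEE 754 double format, so the bit budget above can be checked directly:

```python
import sys

info = sys.float_info
print(info.mant_dig)   # 53 significant bits: the implicit leading 1 plus 52 for f
print(info.epsilon)    # 2**-52 ≈ 2.22e-16, the spacing just above 1.0
print(info.max_exp)    # exponent range top (1024 here; the stored e runs up to 1023)
```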
## Infinite Precision and Computer Algebra Systems
- Irrational numbers have infinitely long decimal expansions without patterns
- Mathematica is a computer algebra system that prioritizes infinite precision before approximations
- Infinite precision allows for dictating precision level but increases memory and computation costs
## Gaps and Scale in Number Representation
- Discussed the concept of gaps in number representation and their implications
- Introduced the concept of scale using powers of two: 2^0 = 1, 2^2 = 4
- Machine numbers are placed between scales defined by powers of two, with finite precision due to 52 bits for f
- Number of machine numbers depends on the number of bits available
- One plus f where f is restricted and has 52 bits
- Machine numbers exist between orders of magnitude, influenced by available bits for finite precision
- Separation in terms of order by two exponents using the same number of bits for one plus f
- Total number of representable values remains constant across lines separated by scaling factors
- Discussed powers of two: 2^3 = 8, 2^5 = 32
- Explored the width of chunks on the machine number line
- Density of machine numbers varies across chunks of the machine number line
- Gaps in the machine number line are not uniform
- Larger gaps occur when fitting the same amount of machine numbers in a wider space
- As the exponent increases, the width of the space for machine numbers increases, leading to larger gaps
- Density of machine numbers is not uniform; they are denser around zero
- Precision discussion involves zero, one, three, and four as reference points
- Widths increase at larger scales while maintaining the same number of points
- Lecture notes used two steps in scale, maintaining point count
- Non-uniformity is a key point in machine number representation
- Challenges arise when implementing number representation on computers due to non-uniformity
- Computers approximate mathematical numbers to the nearest representable value due to non-uniform gaps
- Mathematical numbers are approximated to the nearest representable machine number
- Computers map outputs to the nearest machine number due to finite precision
- Round off error occurs when landing in a gap and being forced onto a machine number
- Round off error is a result of non-uniform scaling in machine number representation
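Python's math.ulp reports the gap just above a number, so the widening gaps can be observed directly (requires Python 3.9+):

```python
import math

# The gap (unit in the last place) doubles every time the power-of-two scale
# doubles: the machine number line is dense near zero and sparse far from it.
for x in (1.0, 2.0, 1e6, 1e15):
    print(x, math.ulp(x))
```

At 1.0 the gap is 2⁻⁵² ≈ 2.2e-16; by 1e15 it has grown to 0.125.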
## Real-World Implications of Number Representation
- Discussion on thought processes occurring in real-world scenarios
- Searching for a link using email and the keyword 'Mario'
## Bit Allocation in Floating Point Representation
- Posting lecture notes on machine representation of numbers to the web
- Consideration of bits, unsigned bits, and unsigned shorts in machine representation
- Discussion on floats and double precision in machine representation
- Discussion on the allocation of bits for signed bit, mantissa, and exponent in floating point representation
- Density of machine numbers decreases as you move away from zero
- Gaps in machine numbers can be large, affecting calculations like Mario's jump in a game
- Video discussion resumed, ensuring no repetition of previous points
## Trade-offs in Scale Selection
- Discussion on space games exploring interstellar phenomena and encountering gaps in representation
- Variability in gaps is due to maintaining scale while preserving decimal precision
- Choosing a certain scale allows for reclaiming bits, offering more precision per scale
- Trade-offs exist when selecting scales in number representation
## Gapping in Physics Simulations
- Demonstration of ragdoll physics with polygon bodies near and away from the origin
- Comparison between single precision and double precision in relation to gapping
- Close to the origin, the machine number line is dense with minimal gapping
- At 10 kilometers from the origin, objects experience less normal bouncing due to gapping
- At 50 kilometers, different parts of objects experience different regions of gap
## Precision and Representation Adjustments
- Discussion on abnormal twitching due to lack of programmed feelings
- Increase to double precision for more bits in representation
- Use of double precision for simulations further from the origin to ensure proper simulation
- Typical problems in number representation discussed and reiterated
- Consideration of graphs in relation to number representation
- Confirmation of dialogues covered in previous discussions
- Submission of dialogues with the zip file of the code is considered complete
## Scale and Precision Challenges
- Anna asked about a previously erased bullet point related to a question from her
- Consequences of non-uniform gaps in machine number line when working with finite precision arithmetic
- Gaps between machine numbers increase with scale, affecting precision
- Double precision helps maintain good precision despite scale
- Code used to identify issues with Mario's jump at scale 64
## Round Off Errors in Number Representation
- Round off error occurs when hitting a number that must be represented in a limited space
- Discussion on the representation of the number one tenth and its implications
- Discussion on binary representation and memory requirements for repeating numbers
- Consideration of cutting off repeating bits to manage memory constraints
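One tenth is the classic example; Python's decimal module can display the machine number actually stored:

```python
from decimal import Decimal

# 0.1 repeats forever in binary, so the stored double is only the nearest
# machine number, and the familiar identity below fails.
print(Decimal(0.1))          # the digits actually stored for 0.1
print(0.1 + 0.2 == 0.3)      # False: each operand carries its own round off
```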
## Errors in Series and Representation
- Truncation error is associated with stopping the Taylor series before completion
## Managing Repeating Numbers and Errors
- Discussion on truncating repeating numbers without using 'truncation' to avoid confusion with Taylor series truncation error
- Round off error and its implications discussed with examples
## Equation Scaling and Recognition
- Discussion on scaling equations and recognizing equivalent equations through scaling
- Discussion on lines in space and their points of intersection
## Matrix and Vector Operations
- Introduction of matrix A with values (17, 5) and (1.7, 0.5)
- Discussion on formatting numbers for better readability in outputs
- Explanation of solving the equation Ax = B in MATLAB using built-in commands
- Discussion on non-unique points of intersection and verification of points on the line
- Discussion on the system having infinitely many solutions not indicated by MATLAB
- Discussion on round off error leading to error in solution and its impact on line recognition
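The example system can be reproduced in NumPy (the right hand side b is a hypothetical choice consistent with the scaled rows):

```python
import numpy as np

# Row two is row one scaled by 0.1: the two lines coincide, so A is singular
# in exact arithmetic and the system has infinitely many solutions.
A = np.array([[17.0, 5.0], [1.7, 0.5]])
b = np.array([22.0, 2.2])

print(np.linalg.cond(A))   # enormous (or infinite): the solver cannot be trusted
# np.linalg.solve(A, b) may raise LinAlgError or return one wildly sensitive
# "answer", depending on how round off perturbs the exact singularity.
```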
## Machine Number Line Limitations
- Discussion on machine number line gaps and limitations in representation
## Computational Environment Concerns
- Advice on when to be concerned about computational environment issues
## Precision and Subtraction Issues
- Discussion on subtraction of nearly equal numbers leading to zeroing out due to precision limitations
## Numerical Interpretation Issues
- Discussion on significant zeros zeroing out leading to machine interpreting the number as zero
- Discussion on adding and subtracting numbers on very different scales
- Analogy of Mario jumping over a gap illustrating machine number rounding issues
- Division by very small numbers can lead to overflow and machine interpreting it as infinity
## General Advice
- Reminder to be cautious of three key issues in computations
## Taylor Series and Derivatives
- Definition of the derivative as represented by a Taylor series
- Truncating the Taylor series to get an approximation
- Error arises from truncating the Taylor series, affecting accuracy
- For the Taylor series to approximate a derivative, h must be small
- Issues arise when x naught is large and a small number is added, leading to potential precision problems
## Numerical Derivative Issues
- When h is small, f(x naught + small) and f(x naught) may appear similar, causing precision issues
- Division by a very small h can lead to numerical derivative issues
- Problem two for small h representation discussed
- Problem three for small h representation discussed
## Machine Number Line and Precision Issues
- Finite difference issues arise when h is too small, leading to interaction of three problematic issues
- Round off error as another source of error when using small h due to machine number line structure
- Machine number line cannot handle real number line, affecting decimal precision over many magnitudes
- Discussion on predicting round off error occurrence
- Lowering h improves error from truncation of Taylor series until it becomes too low, causing glitches
- Ill-behaved results occur when unexpected outcomes arise despite previous stability
- Introduction of new notation for f(x + h) in numerical derivative context
- Mathematical results must land on a machine number, so the stored value is the exact value times (1 ± ε)
- Discussion on epsilon as the smallest number such that one plus epsilon is not equal to one in machine precision
- Epsilon is the smallest number such that adding it to one does not result in one due to round off error
- Explanation of the tilde ('twiddle') as marking the computed difference, which differs from the exact one because each evaluation lands on a machine number with its own epsilon
- Discussion on symbolic calculation of f(x + h) and f(x) in numerical derivatives
- Both f(x + h) and f(x) can catch epsilon, affecting the calculation
- Assumption made to simplify working with epsilon in numerical derivatives
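Machine epsilon as defined here can be checked in one line (Python doubles share MATLAB's eps value):

```python
import sys

eps = sys.float_info.epsilon     # 2**-52, the same value as MATLAB's eps
print(1.0 + eps == 1.0)          # False: eps survives the addition
print(1.0 + eps / 2 == 1.0)      # True: half of eps is rounded away
```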
## Control Steps in Numerical Calculations
- Control step discussion regarding handling plus or minus issues in calculations
- Discussion on handling inequalities by taking absolute values of plus-minus structures to maximize magnitude
- Taking absolute values of plus-minus structures to prevent subtraction from reducing magnitude
- Assumption that |f(x)| > |f(x + h)| to simplify inequality handling
- If |f(x)| > |f(x + h)|, the round off contribution to f(x + h) − f(x) is bounded by 2ε|f(x)|
- Discussion on upper bounds in numerical calculations with additions and subtractions
- Assume f(x) is larger in magnitude than f(x + h) for simplification
- Bound established: the round off in f(x + h) − f(x) is at most 2ε times the magnitude of f(x)
## Precision Representation
- Delta f tilde represents what the computer sees, while delta f is the exact mathematical symbol
- Tilde indicates the difference between infinite precision and computer representation
- Discussion on dividing Delta f by h in numerical calculations
- Delta f tilde divided by h is bounded by the exact delta f divided by h plus 2ε|f(x)|/h
## Magnitude Effects on Derivatives
- As h approaches zero, the magnitude of the effect on the derivative is significant
- Assumption of first-order accuracy in numerical calculations
- For a first order accurate formula, Δf̃/h is bounded by f′(x) plus O(h¹) plus the round off term 2ε|f(x)|/h
- Discussion on moving f prime to the left side of the equation
- Delta f tilde represents the computer's view of the distance in outputs, while delta f is the exact mathematical distance
- Discussion on building a derivative as rise over run, with tilde representing the computer's view
- Contrast between f prime of x as the exact mathematical derivative and the computer's derivative
- Discussion on error in derivative approximation as mentioned by Xavier
- Loss of equality in the mathematical argument due to the combined O(h) + 2ε|f(x)|/h term
- Big O notation discussed as a way to describe the order of magnitude without specifying the constant multiplier
- Discussion on the power of h in the expression with two epsilon over h times magnitude f
- Big O notation indicates truncation error in Taylor series as a source of error
- Discussion on the second term contributing to overall error, including truncation and epsilon introduction
- Epsilon is the smallest number such that adding it to one results in a number different from one, due to rounding error on the machine number line
- Error in computation arises from truncating Taylor series and finite precision floating point arithmetic
- The truncation term C·h becomes approximately equal to 2ε|f(x)|/h at the crossover
## Error Transition Analysis
- Discussion on the transition point where truncation error and rounding error are equal in magnitude
- Analysis of the graph to determine when truncation error is dominant and when rounding error takes over
- Confirmation on understanding the transition point where truncation error equals rounding error
- Discussion on the resulting approximation h² ≈ 2ε|f|/C
## Symbolic Computation and Order Notation
- Discussion on symbolic computation and order notation in first order accurate finite difference
- Agreement on considering the magnitude as order Epsilon without detailed computation
- Discussion on h being approximately on the order of the square root of epsilon
- Discussion on the impact of epsilon on the choice of h, where h is on the order of the square root of epsilon
- Estimation of case transition from truncation error to round-off error based on epsilon and method knowledge
- Estimation of how small h can be made before the trade-off between truncation error and rounding error becomes significant
- Discussion on using inequalities to maintain mathematical relationships in equations
- Mention of practical approach to estimating where round-off error begins to dominate in calculations
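A rough check of the h ~ √ε estimate for the first order forward difference (f = sin at x = 1 is an assumed test case):

```python
import math
import sys

eps = sys.float_info.epsilon
h_star = math.sqrt(eps)                  # ≈ 1.5e-8, the predicted sweet spot

fwd = lambda f, x, h: (f(x + h) - f(x)) / h
exact = math.cos(1.0)
err_near_optimal = abs(fwd(math.sin, 1.0, h_star) - exact)
err_too_small = abs(fwd(math.sin, 1.0, 1e-13) - exact)
print(err_near_optimal < err_too_small)   # True: round off dominates past h*
```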
## Epsilon Impact on Calculations
- Epsilon is expected to be very small, around 2 × 10^-16 in double precision
- Focus is on the order of magnitude rather than precise values
- Epsilon drives the calculations, but exact values are not typically sought
- Aim is to gauge how small values can go before causing computational glitches
- Subtraction of nearly equal numbers, arithmetic across very different scales, and division by near-zero numbers can lead to computational glitches
- Recommendation to avoid making values too small to prevent round-off error
- Exact numerical analysis is not typically performed unless necessary for deep analysis
- Importance of stating the accuracy level in numerical methods