MAT1001 Differential Calculus: Lecture Notes
Chapter 7 Calculus in Computer Science
In most of this module we have focussed on the mathematics, and even when motivating the connection to computer science we have been fairly sketchy. In this chapter we rectify that by discussing some concrete examples of where the material you have studied here is relevant to Computer Science, AI, and Machine Learning.
If you try to read this chapter before covering all of the basics, you may find the examples hard to follow; if you have finished everything up to chapter 4, you should be fine.
This chapter is split into sections, at least one of which should be relevant to each of the degree schemes that include this module: Computer Science, Computer Science and AI, and Robotics and AI. This does not mean that you will not find examples of interest in other sections; it just means that the examples are grouped thematically. As with several other chapters, this is a work in progress and will be adapted and expanded over time. Check back frequently if you want to see the most up-to-date examples.
7.1 LLMs and AI
A key concept in machine learning is the loss function, sometimes called the cost or error function, which measures the difference between a model’s output and the true or expected value. Loss functions appear throughout decision theory and the study of optimisation problems. They appear whenever you train an AI or Large Language Model, where they measure the deviation of the model’s output from the true values in the training data.
It could be that you are checking whether your model can perform a mathematical calculation, in which case you will compare two numbers: the model’s output against the true solution. It could also be a more general problem, such as training a model to recognise images, in which case you will compare the answer output by your model to the true label. The main idea is that this comparison can always be described by some function; in fact, a key step is deciding what an appropriate loss function for your problem is.
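As a concrete illustration, here is a minimal sketch in Python of one common choice for numerical outputs, the mean squared error loss; the function name, data, and values are invented for this example.

def mse_loss(predictions, targets):
    # Mean squared error: the average of the squared differences
    # between the model's outputs and the true values.
    n = len(predictions)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n

# Example: a model's outputs on three training inputs vs. the true values.
outputs = [2.5, 0.0, 2.1]
true_values = [3.0, -0.5, 2.0]
print(mse_loss(outputs, true_values))  # 0.17, the average squared deviation

Squaring the differences keeps the loss non-negative and penalises large deviations more heavily than small ones; it also makes the loss differentiable, which matters for the next step.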
Once we have chosen a loss function, training is the process of tweaking the model’s parameters to minimise the loss, so that the model’s outputs match the true values as closely as possible. This is where the material of this module comes in: the standard way to minimise the loss is to compute its derivative with respect to the parameters and step in the direction in which the loss decreases, a method known as gradient descent.
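To make that concrete, here is a minimal sketch of gradient descent for a one-parameter model y = w * x trained on the mean squared error loss; the data, initial guess, and learning rate are all invented for illustration.

# Fit y = w * x to data by repeatedly stepping w against the derivative
# of the loss. For L(w) = (1/n) * sum((w*x - y)^2), differentiating with
# respect to w gives dL/dw = (2/n) * sum((w*x - y) * x).
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.1, 5.9]      # roughly y = 2x
w = 0.0                   # initial guess for the parameter
learning_rate = 0.05      # size of each downhill step

for step in range(100):
    n = len(xs)
    grad = (2 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
    w -= learning_rate * grad   # move w a small step downhill

print(w)  # approximately 2.0, the slope of the underlying line

Each update moves w in the direction in which the loss decreases, so after enough steps w settles near the value that minimises the loss; this is exactly the derivative-based minimisation studied in this module, applied many times over.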
7.2 Robotics