Notes on Interval Matrix Theory
Published:
Interval matrix theory is a subfield of linear algebra concerned with obtaining solutions to systems of equations $Ax = b$, where the entries of $A$ and $b$ take values in intervals.
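As a toy illustration of the setting (not any particular solution method from the literature), one can enumerate endpoint combinations of an interval system and solve each resulting point system; the componentwise extremes of those solutions give points inside the solution set, though not a guaranteed enclosure. All numbers below are made up:

```python
import itertools
import numpy as np

# Naive illustration for an interval system [A]x = [b]: solve the point
# system obtained from every combination of interval endpoints. The
# componentwise min/max of these solutions are points inside the solution
# set (this is NOT a rigorous enclosure in general).
A_lo = np.array([[2.0, -1.0], [0.0, 2.0]])
A_hi = np.array([[3.0, 1.0], [1.0, 3.0]])
b_lo = np.array([1.0, 0.0])
b_hi = np.array([2.0, 1.0])

solutions = []
for mask_A in itertools.product([0, 1], repeat=4):
    A = np.where(np.array(mask_A).reshape(2, 2) == 0, A_lo, A_hi)
    for mask_b in itertools.product([0, 1], repeat=2):
        b = np.where(np.array(mask_b) == 0, b_lo, b_hi)
        solutions.append(np.linalg.solve(A, b))

solutions = np.stack(solutions)                      # 2**4 * 2**2 = 64 corner solutions
lo, hi = solutions.min(axis=0), solutions.max(axis=0)
print(lo, hi)
```

Exact characterization of the solution set is much harder (computing the tight hull is NP-hard in general), which is part of what makes the theory interesting.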
Published:
In his PhD thesis, Crites (see Crites & Barto, 1998) analyzed the problem of controlling a bank of elevators, modeling the system as a multi-agent POMDP and noting both the extreme size of the state space (with 4 elevators in a 10-story building, on the order of $10^{22}$ states) and the effectiveness of neural-network-based $Q$-learning in attacking the problem.
Published:
Spent the week at a workshop at the University of Chicago, “Invitation to Algebraic Statistics,” part of the IMSI long program “Algebraic Statistics and our Changing World”. This week’s workshop featured excellent talks on the application of algebraic statistics to estimation & optimization problems, graphical models, neural networks, algebraic economics, and ecological problems.
Published:
This report describes an attempt to model the selection process for the Baseball Hall of Fame (HOF) using several statistical techniques. We describe the history and election procedures of the Hall of Fame, and assess the suitability of kernel estimation and machine learning methods for predicting which players will be inducted.
Published:
We have a dataset of NFL player tracking coordinates: \(x_t, y_t\) over time. The goal is to forecast where each player will be a few frames ahead. That’s the whole game: predict motion.
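Any forecasting model here has a natural straw man to beat: assume each player keeps moving at their most recently observed velocity. A minimal sketch of that constant-velocity baseline, with made-up coordinates rather than real tracking data:

```python
import numpy as np

# Constant-velocity baseline: extrapolate the last observed per-frame
# displacement `horizon` frames into the future.
def constant_velocity_forecast(xy, horizon):
    """xy: (T, 2) array of a player's (x_t, y_t); predict `horizon` frames ahead."""
    velocity = xy[-1] - xy[-2]        # last observed per-frame displacement
    return xy[-1] + horizon * velocity

track = np.array([[10.0, 5.0], [10.5, 5.2], [11.0, 5.4]])
print(constant_velocity_forecast(track, horizon=3))  # → [12.5  6. ]
```

Anything fancier (learned dynamics, interaction-aware models) should be judged against this kind of trivial extrapolation.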
Published:
At the start of this year, a few friends and I got together to play a predictions game. We all filled out 5x5 bingo cards with predictions that had to come due in 2024 on any topic in the world, with eternal honor going to the one who got a bingo. Here’s what my card looked like to start:
Published:
Back into the swing of things: attended two talks this week, and prepping for the IMSI Workshop to begin next Monday.
Published:
People say a model “scales” when the loss keeps dropping predictably as you feed it more data, parameters, or compute.
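The usual operational version of this claim is that loss follows an approximate power law in the resource, $L(N) \approx a N^{-b}$, which is a straight line on a log-log plot. A sketch with synthetic numbers (not measurements from any real model), recovering the exponent by a linear fit in log-log space:

```python
import numpy as np

# Synthetic power-law losses: L(N) = 5 * N^(-0.076), with N a stand-in
# for data, parameters, or compute. Fitting a line to (log N, log L)
# recovers the scaling exponent as the negated slope.
N = np.array([1e6, 1e7, 1e8, 1e9])
L = 5.0 * N ** -0.076

slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
print(-slope)  # recovered exponent, ≈ 0.076
```

The point of "scaling" talk is that a fit like this, made at small $N$, keeps predicting the loss at much larger $N$.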
Published:
Stalled on Gaussian process techniques this week. It is ultimately hard to pin down what precisely GP-regression can achieve: if we already have an aligned shape, we can of course calculate its deformation field and then transform the reference shape into the target shape. That might suggest GP-regression should be able to go from an arbitrary shape to an aligned shape, which may indeed be the case. I still have some confusion about what the actual training and test data of a GP-regression should be; ultimately I suppose it will have to output coefficients for a weighted linear combination of the eigenbasis derived from the eigenfunctions of the GP kernel.
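The "coefficients in an eigenbasis" idea at the end can at least be sketched numerically: eigendecompose a kernel matrix over discretized points and represent a deformation field by its coordinates in the leading eigenvectors (a discrete Karhunen-Loève expansion). The kernel, length scale, and toy deformation field below are all illustrative assumptions, not anything from the project:

```python
import numpy as np

# Discrete Karhunen-Loeve sketch: a Gaussian kernel matrix over 40
# points in [0, 1], eigendecomposed; a toy 1D "deformation field" is
# then represented by its coefficients in the leading eigenvectors.
pts = np.linspace(0.0, 1.0, 40)[:, None]
K = np.exp(-(pts - pts.T) ** 2 / (2 * 0.15 ** 2))

eigvals, eigvecs = np.linalg.eigh(K)        # ascending order
order = np.argsort(eigvals)[::-1]           # re-sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

u = np.sin(2 * np.pi * pts[:, 0])           # toy deformation field
coeffs = eigvecs.T @ u                      # coordinates in the eigenbasis
u_approx = eigvecs[:, :10] @ coeffs[:10]    # low-rank reconstruction
print(np.abs(u - u_approx).max())           # small: 10 modes suffice here
```

In this framing, a regression that outputs the leading `coeffs` is equivalent to outputting a (low-rank) deformation field.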
Published:
Beginning work on a miniproject based on reading the research of Marcel Lüthi and Thomas Gerig on Gaussian process morphable models (GPMMs). Their GPMMs are an expansion of the classical point distribution models. Given a set of shapes in space $\{\Gamma_1,\dots,\Gamma_n\}$, they fix one shape as the reference $\Gamma_R$ and then consider deformation fields $u_i$, vector fields $\mathbb{R}^3 \rightarrow \mathbb{R}^3$, such that each shape can be written $\Gamma_i = \{\bar{x} + u_i(\bar{x}) \mid \bar{x} \in \Gamma_R\}$.
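A minimal numerical sketch of this idea, under purely illustrative assumptions (a 1D reference curve embedded in 2D rather than $\mathbb{R}^3$, and an arbitrary Gaussian kernel, not Lüthi and Gerig's actual models): sample a deformation field $u$ from a zero-mean GP over the reference points and apply it to get a new shape.

```python
import numpy as np

# Sample a random deformation field u ~ GP(0, k) over a discretized
# reference shape and form Gamma = reference + u(reference).
rng = np.random.default_rng(0)
reference = np.linspace(0.0, 1.0, 50)[:, None]      # reference shape points

def gaussian_kernel(X, Y, scale=1.0, length=0.2):
    d2 = (X - Y.T) ** 2
    return scale * np.exp(-d2 / (2 * length ** 2))

K = gaussian_kernel(reference, reference)
K += 1e-8 * np.eye(len(reference))                  # jitter for stability
Lchol = np.linalg.cholesky(K)

# One independent GP sample per coordinate of the deformation field.
u = Lchol @ rng.standard_normal((len(reference), 2))
deformed = np.hstack([reference, np.zeros_like(reference)]) + u
print(deformed.shape)                               # (50, 2)
```

Choosing the kernel is choosing the prior over deformations, which is the sense in which GPMMs generalize the sample-covariance prior of point distribution models.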
Published:
Of late, have read much more about statistical shape models: their origins and the various ways people have implemented them, their connections to long-running computer vision research, and state-of-the-art work with neural networks.
Published:
Landed in Berlin on the 19th of June to participate in the G-RIPS program at the Zuse Institute Berlin, on the campus of the Freie Universität. Everyone very kindly welcomed me and the other students (most of them American, one from the UK) and introduced us to the two topics we will be working on: three of us, myself included, will be investigating statistical shape models.