I think these were both answered on this page: http://community.boredofstudies.org...320/introductory-probability.html#post7154145 . Thanks InteGrand. I've another question:
Aaah... found them, thanks again!
This was answered here: http://community.boredofstudies.org/238/extracurricular-topics/350045/dot-product.html#post7148468 .
I get how to do the first two parts. How would you do c)?
Thanks
Thanks.
The problem I'm having is that I get cos(DEF) as a number greater than 1, so the angle is undefined when I take its inverse cosine. And for the second part, I plan to use the formula Area = (1/2)ab sin(C) to calculate the area, but I can't do this... Maybe I'm doing something wrong?
MATH1151 ALG Ch4 Q22
Thanks.
Thanks, it is indeed easier!
I found my mistake, so when I calculate DE = (1,1,1) and EF = (1,1,2), I find cos(DEF) = 4/sqrt(18). Previously, I found 4/sqrt(12). I feel so dumb...
Thanks!
MATH1151 ALG Ch4 Q22
a) Vec(DE) . Vec(FE) = (e - d).(e - f) = [1,1,1]^T . [-1,-1,-2]^T = -4
|DE| = sqrt(1+1+1) = sqrt(3)
|FE| = sqrt(1+1+4) = sqrt(6)
Hence cos(DEF) = (-4)/sqrt(3*6) = -4/(3*sqrt(2))
But yeah, use the method InteGrand gave for part b).
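As a quick numerical check (a minimal NumPy sketch, assuming the vectors quoted above, DE = (1,1,1) and EF = (1,1,2), and using ED and EF for the angle at E; the area is computed both via (1/2)ab sin(C) and via the cross product):

```python
# Sanity check of cos(DEF) and the area of triangle DEF,
# assuming the vectors quoted in the thread: DE = (1,1,1), EF = (1,1,2).
import numpy as np

DE = np.array([1.0, 1.0, 1.0])   # vector from D to E
EF = np.array([1.0, 1.0, 2.0])   # vector from E to F
ED = -DE                         # the angle at E is between ED and EF

# cosine of the angle at E via the dot product formula
cos_DEF = ED.dot(EF) / (np.linalg.norm(ED) * np.linalg.norm(EF))
print(cos_DEF)                   # -0.9428... = -4/(3*sqrt(2)), so |cos| <= 1

angle = np.arccos(cos_DEF)       # well-defined now

# Area of the triangle two ways: (1/2)|ED||EF|sin(DEF) and (1/2)|ED x EF|
area_sine  = 0.5 * np.linalg.norm(ED) * np.linalg.norm(EF) * np.sin(angle)
area_cross = 0.5 * np.linalg.norm(np.cross(ED, EF))
print(area_sine, area_cross)     # both ~0.7071 = sqrt(2)/2
```

(On the sign: with these vectors, ED . EF = -4 while DE . EF = +4, which would account for the +4/sqrt(18) versus -4/(3*sqrt(2)) difference above.)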
A justification like that for why there's no solution would require us to put the augmented matrix into row-echelon form first, and then inspect it.
Thanks!
Also, there isn't an answer provided at the back of the problems book for this; could you please explain if I am right or not? Here is the question:
The answer I get is:
Equation of altitude through A: l1 = (0,1,2) + t(2,-6,-14)
Equation of altitude through B: l2 = (-1,4,1) + u(0,3,3)
I found the direction vectors for t and u by taking the cross products of BC and CD, and of AD and AC, respectively.
Then, to determine if the lines intersect or not:
As is evident, b has to be a leading column, and the leading element in b isn't 0 since the lines aren't parallel. So the altitudes don't intersect, as there is no solution to this system.
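As a sanity check (a minimal NumPy sketch, assuming the altitude equations as written above): setting l1 = l2 gives a linear system in t and u, and a solution exists exactly when rank(A) = rank([A | b]).

```python
# Do the two altitudes intersect? Assumes the parametric equations above:
# l1 = (0,1,2) + t(2,-6,-14) and l2 = (-1,4,1) + u(0,3,3).
import numpy as np

p1, d1 = np.array([0.0, 1.0, 2.0]),  np.array([2.0, -6.0, -14.0])
p2, d2 = np.array([-1.0, 4.0, 1.0]), np.array([0.0, 3.0, 3.0])

# l1 = l2  <=>  t*d1 - u*d2 = p2 - p1,  i.e.  A @ [t, u] = b
A = np.column_stack([d1, -d2])
b = p2 - p1

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)                  # 2 and 3, so the ranks differ
print("intersect:", rank_A == rank_Ab)  # False: no solution, no intersection
```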
Pretty much like InteGrand said: don't forget the methods that Josef Dick explained.
Thanks leehuan! And yeah, you're right, my notation is wrong for the augmented matrix; appreciate that.
1. Use Gaussian elimination to bring out a row-echelon form.
Via MATLAB, assuming that your altitude through A is correct (the one through B is definitely correct), the reduced row-echelon form is
1 0 | 0
0 1 | 0
0 0 | 1
(Obviously, there's no need to actually bring out the reduced row-echelon form on paper.)
2. So since the right-hand column is indeed a leading column, like you said, there exists no solution. Hence the conclusion is also correct.
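(A sketch of the same row reduction done symbolically with SymPy, assuming the same A and b as in the rank check after the previous post:)

```python
# Reproduce the reduced row-echelon form of the augmented matrix [A | b],
# assuming A = [d1, -d2] and b = p2 - p1 for the two altitudes above.
import sympy as sp

A = sp.Matrix([[2, 0], [-6, -3], [-14, -3]])   # columns: d1 and -d2
b = sp.Matrix([-1, 3, -1])                     # p2 - p1

R, pivots = A.row_join(b).rref()
print(R)        # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(pivots)   # (0, 1, 2): the last (augmented) column is a pivot column => no solution
```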
Edit: Also, this is me being pedantic now, but technically Ax = b is this:
Augmented matrix notation is technically [A | b].
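For this particular system (a sketch only, assuming the altitude equations quoted earlier and x = (t, u)^T), the two notations look like:

```latex
% A x = b and the augmented matrix [A | b] for the altitude-intersection system,
% assuming the line equations quoted earlier in the thread.
\[
\underbrace{\begin{pmatrix} 2 & 0 \\ -6 & -3 \\ -14 & -3 \end{pmatrix}}_{A}
\begin{pmatrix} t \\ u \end{pmatrix}
=
\underbrace{\begin{pmatrix} -1 \\ 3 \\ -1 \end{pmatrix}}_{b},
\qquad
[\,A \mid b\,] =
\left(\begin{array}{cc|c} 2 & 0 & -1 \\ -6 & -3 & 3 \\ -14 & -3 & -1 \end{array}\right).
\]
```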
A proof of 1) may be found here: http://community.boredofstudies.org/238/extracurricular-topics/350230/least-squares.html .
I have a few more questions:
Thanks, I'll leave the latter for later!
For 2), it is known as the Gram-Schmidt orthonormalisation process, and the proof is typically done by induction.
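For reference, a minimal sketch of the classical Gram-Schmidt process in Python (it assumes the input vectors are linearly independent and skips the usual numerical-stability refinements):

```python
# Classical Gram-Schmidt orthonormalisation (sketch).
# Assumes the input vectors are linearly independent.
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal list spanning the same space as `vectors`."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        # subtract the projection of v onto each orthonormal vector found so far
        for q in basis:
            w -= (q @ v) * q
        basis.append(w / np.linalg.norm(w))
    return basis

# Example: orthonormalise three vectors in R^3
vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
Q = np.column_stack(gram_schmidt(vs))
print(np.round(Q.T @ Q, 10))   # identity matrix => the columns are orthonormal
```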
It's actually possible to prove both distributive laws by considering what goes on component-wise and then using some rules over the real numbers.
Associativity is much harder (because you obviously can't assume distributivity to prove associativity). If I remember to, I'll go grab my solution when my break ends.
Thanks, I'll leave the latter for later!
I'm pretty frustrated by these... What would be the shortest way of doing them?
Prove the following properties of matrices:
1) If the product AB exists, then A(λB) = λ(AB) = (λA)B
2) Associative law of matrix multiplication. If products AB and BC exist, then A(BC) = (AB)C
3) AI = A and IA = A where I represents identity matrices of the appropriate (possibly different) sizes
4) Left distributive law. If A+B and AC exist, then (A+B)C = AC+BC
5) Right distributive law. If B+C and AB exist, then A(B+C) = AB+AC
I know you could do these by writing each of them out... but it takes way too long to do so test-wise (plus it's frustrating).
You don't need to write out an actual n×n matrix. You just need to use the definition of the (i,j) entry of a matrix product as a sum (essentially a dot product) and use properties of summations, denoting the (i,j) entry of a matrix A by, say, a_ij.
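For instance, a sketch of that summation argument in LaTeX (left distributive law and associativity; sizes assumed compatible, exactly as in the conditions listed in the question):

```latex
% Entrywise proofs via the definition (XY)_{ij} = \sum_k x_{ik} y_{kj}.
% Left distributive law:
\[
\bigl((A+B)C\bigr)_{ij} = \sum_k (a_{ik} + b_{ik})\, c_{kj}
  = \sum_k a_{ik} c_{kj} + \sum_k b_{ik} c_{kj}
  = (AC)_{ij} + (BC)_{ij}.
\]
% Associativity: interchange the two finite sums.
\[
\bigl(A(BC)\bigr)_{ij} = \sum_k a_{ik} \sum_{l} b_{kl} c_{lj}
  = \sum_{l} \Bigl(\sum_k a_{ik} b_{kl}\Bigr) c_{lj}
  = \bigl((AB)C\bigr)_{ij}.
\]
```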
But is there a way to prove the distributive laws WITHOUT considering the matrices component-wise?
Is this right?
This is for the first question, btw