Part 1: Compute Forward Variable for Hidden Markov Models
Compute the forward variable alpha for the observation sequence O=(0,1,0,2,0,1,0) given the HMM (without renormalization).

Part 1A: Forward Variable for HMM with Two States
The HMM for Part 1A is specified below:
A = [[0.66, 0.34], [1, 0]]
B = [[0.5, 0.25, 0.25], [0.1, 0.1, 0.8]]
pi = [0.8, 0.2]
alpha (rows = states 0 and 1; columns = t = 1, ..., 7):
State 0: 0.40, ?.????, 0.030230, 0.00559, ?.??????, ?.??????, ?.??????
State 1: 0.02, 0.0136, ?.??????, 0.00822, ?.??????, ?.??????, 0.000035
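The table above can be checked with a minimal Python sketch of the standard forward recursion (initialization alpha_1(i) = pi_i * b_i(o_1), then alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})), shown here on the Part 1A model. It is only an illustration of the recursion, not the required hand computation.

```python
# Forward recursion (no renormalization) for the Part 1A model.
A  = [[0.66, 0.34], [1.0, 0.0]]            # state transition matrix
B  = [[0.5, 0.25, 0.25], [0.1, 0.1, 0.8]]  # emission matrix (rows = states)
pi = [0.8, 0.2]                            # initial state distribution
O  = [0, 1, 0, 2, 0, 1, 0]                 # observation sequence

N, T = len(pi), len(O)

# Initialization: alpha_1(i) = pi_i * b_i(o_1)
alpha = [[pi[i] * B[i][O[0]] for i in range(N)]]

# Induction: alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})
for t in range(1, T):
    alpha.append([sum(alpha[t - 1][i] * A[i][j] for i in range(N)) * B[j][O[t]]
                  for j in range(N)])
```

Here `alpha[t][i]` corresponds to the entry in row (state) i, column t+1 of the table; e.g. `alpha[0]` is the first column (0.40, 0.02).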
Part 1B: Forward Variable for HMM with Three States
The HMM for Part 1B is specified below:
A = [[0.8, 0.1, 0.1], [0.4, 0.2, 0.4], [0, 0.3, 0.7]]
B = [[0.66, 0.34, 0], [0, 0, 1], [0.5, 0.4, 0.1]]
pi = [0.6, 0, 0.4]
alpha (rows = states 0, 1, 2; columns = t = 1, ..., 7):
State 0: 0.396, 0.107712, ?.??????, ?.??????, 0.003919, 0.001066, ?.??????
State 1: 0.000, ?.??????, ?.??????, ?.??????, ?.??????, ?.??????, ?.??????
State 2: 0.200, 0.071840, 0.030530, ?.??????, 0.003916, ?.??????, 0.000492
[HCI Implementation Credit] Part 1C: Forward Algorithm with Renormalization
Using the same HMM and the same observation sequence as in Part 1B, compute the forward variable
alpha with renormalization and also give the renormalization coefficients c for each time slice of
the alpha variable.
Your answers should be precise to within at least 6 decimal digits.
Renormalized alpha (rows = states 0, 1, 2; columns = t = 1, ..., 7):
State 0: 0.664430, ?.??????, ?.??????, ?.??????, 0.500200, ?.??????, ?.??????
State 1: 0.000000, 0.000000, ?.??????, ?.??????, ?.??????, 0.000000, ?.??????
State 2: ?.??????, ?.??????, ?.??????, 0.154160, ?.??????, 0.540368, 0.466385
Renormalization coefficients c: 1.677852, ?.??????, 2.054335, ?.??????, 2.240014, ?.??????, ?.??????
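The renormalized recursion can be sketched as follows on the Part 1B/1C model: at each time slice the unscaled alpha column is divided by its sum so that each column of renormalized alpha sums to 1, and c_t is the reciprocal of that pre-scaling column sum. This is one common convention for the scaling coefficients; it reproduces the filled-in entries above.

```python
# Forward recursion with per-time-slice renormalization (Part 1B/1C model).
A  = [[0.8, 0.1, 0.1], [0.4, 0.2, 0.4], [0.0, 0.3, 0.7]]
B  = [[0.66, 0.34, 0.0], [0.0, 0.0, 1.0], [0.5, 0.4, 0.1]]
pi = [0.6, 0.0, 0.4]
O  = [0, 1, 0, 2, 0, 1, 0]

N, T = len(pi), len(O)

col = [pi[i] * B[i][O[0]] for i in range(N)]   # unscaled alpha_1
alpha_hat, c = [], []
for t in range(T):
    s = sum(col)                               # pre-scaling column sum
    c.append(1.0 / s)                          # renormalization coefficient c_t
    alpha_hat.append([x / s for x in col])     # scaled column now sums to 1
    if t + 1 < T:
        # propagate the *scaled* column through the usual induction step
        col = [sum(alpha_hat[t][i] * A[i][j] for i in range(N)) * B[j][O[t + 1]]
               for j in range(N)]
```

With this convention, `c[0]` = 1/(0.396 + 0.000 + 0.200) = 1.677852 and `alpha_hat[0][0]` = 0.396/0.596 = 0.664430, matching the given entries.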
Part 2: Compute Backward Variable for Hidden Markov Models
Compute the backward variable beta for the observation sequence O=(0,1,0,2,0,1,0) given the HMM (without renormalization).

Part 2A: Backward Variable for HMM with Two States
The HMM for Part 2A is identical to the HMM for Part 1A.
beta (rows = states 0 and 1; columns = t = 1, ..., 7):
State 0: ?.??????, ?.??????, 0.015187, ?.??????, ?.??????, 0.364, 1.0
State 1: 0.001314, ?.??????, ?.??????, 0.038530, ?.??????, ?.???, ?.?
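A minimal sketch of the backward recursion (initialization beta_T(i) = 1, then beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)), shown here on the Part 2A model:

```python
# Backward recursion (no renormalization) for the Part 2A model.
A = [[0.66, 0.34], [1.0, 0.0]]             # state transition matrix
B = [[0.5, 0.25, 0.25], [0.1, 0.1, 0.8]]   # emission matrix (rows = states)
O = [0, 1, 0, 2, 0, 1, 0]                  # observation sequence

N, T = len(A), len(O)

# Initialization: beta_T(i) = 1 for all states i
beta = [[1.0] * N for _ in range(T)]

# Induction, backwards in time:
# beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
for t in range(T - 2, -1, -1):
    for i in range(N):
        beta[t][i] = sum(A[i][j] * B[j][O[t + 1]] * beta[t + 1][j]
                         for j in range(N))
```

As in Part 1, `beta[t][i]` corresponds to the entry in row (state) i, column t+1 of the table; the last column is all 1's by definition.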
Part 2B: Backward Variable for HMM with Three States
The HMM for Part 2B is identical to the HMM for Part 1B.
beta (rows = states 0, 1, 2; columns = t = 1, ..., 7):
State 0: ?.??????, ?.??????, 0.006823, ?.??????, ?.??????, ?.???, ?.?
State 1: 0.001862, ?.??????, ?.??????, ?.??????, ?.??????, 0.464, ?.?
State 2: 0.002140, ?.??????, ?.??????, 0.034300, ?.??????, ?.???, ?.?
Part 3: Viterbi Algorithm
Find the most probable state sequence for the observation sequence O=(0,1,0,2,0,1,0) given the HMM using the Viterbi algorithm.
In this problem, both the observation-symbol indices and the state indices are zero-based: the first row of A corresponds to state 0, the second row to state 1, and so forth. The same holds for the indexing of the elements of pi. Similarly, the first column of B corresponds to observation symbol 0, the second column to observation symbol 1, and so on.
Part 3A: Viterbi Algorithm for HMM with Two States
The HMM for Part 3A is identical to the HMM for Part 1A.
Most probable state sequence (t = 1, ..., 7): ?, ?, ?, ?, ?, ?, ?
Part 3B: Viterbi Algorithm for HMM with Three States
The HMM for Part 3B is identical to the HMM for Part 1B.
Most probable state sequence (t = 1, ..., 7): 2, ?, ?, 1, ?, ?, ?
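The Viterbi recursion can be sketched as follows on the Part 3B model, using the zero-based state and observation indices specified above. delta_t(j) is the probability of the best path ending in state j at time t, and psi_t(j) records the predecessor achieving that maximum for backtracking.

```python
# Viterbi algorithm for the Part 3B model (zero-based indices).
A  = [[0.8, 0.1, 0.1], [0.4, 0.2, 0.4], [0.0, 0.3, 0.7]]
B  = [[0.66, 0.34, 0.0], [0.0, 0.0, 1.0], [0.5, 0.4, 0.1]]
pi = [0.6, 0.0, 0.4]
O  = [0, 1, 0, 2, 0, 1, 0]

N, T = len(pi), len(O)

# Initialization: delta_1(i) = pi_i * b_i(o_1)
delta = [[pi[i] * B[i][O[0]] for i in range(N)]]
psi = [[0] * N]

# Recursion: delta_t(j) = max_i(delta_{t-1}(i) * a_ij) * b_j(o_t)
for t in range(1, T):
    row, back = [], []
    for j in range(N):
        best = max(range(N), key=lambda i: delta[t - 1][i] * A[i][j])
        row.append(delta[t - 1][best] * A[best][j] * B[j][O[t]])
        back.append(best)
    delta.append(row)
    psi.append(back)

# Backtrack from the most probable final state
path = [max(range(N), key=lambda j: delta[T - 1][j])]
for t in range(T - 1, 0, -1):
    path.append(psi[t][path[-1]])
path.reverse()
```

After running, `path` holds the zero-based state sequence for t = 1, ..., 7; its first and fourth entries agree with the two values already given above (2 and 1).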
Part 4: Complete the First Project Proposal Peer Review
The peer review is submitted separately in Canvas.
Part 5: Complete the Second Project Proposal Peer Review
The peer review is submitted separately in Canvas.
Part 6: Complete the Third Project Proposal Peer Review
The peer review is submitted separately in Canvas.