This paper presents a system that recognizes a wide variety of action categories in videos. The system is trained on the UCF101 action recognition dataset of temporally trimmed videos. For each video, the system extracts dense trajectory features and encodes them as Fisher Vectors. The Fisher Vector accumulates the gradients with respect to the means and covariances of each mode of a Gaussian Mixture Model (GMM), computed separately for each descriptor type: HOG, HOF, MBHx, and MBHy. After dimensionality reduction with Principal Component Analysis, a one-vs-rest SVM classifies each feature vector. The experimental results demonstrate a significant improvement over the baseline.
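The Fisher Vector encoding step described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a diagonal-covariance GMM (as is standard for Fisher Vectors), uses scikit-learn and NumPy for brevity, and substitutes random arrays for the real HOG/HOF/MBH descriptors. The gradients with respect to the GMM means and covariances, followed by power and L2 normalization, follow the common Fisher Vector formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode local descriptors as a Fisher Vector: gradients of the
    log-likelihood w.r.t. the GMM means and (diagonal) covariances."""
    T, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)            # (T, K) posteriors
    mu = gmm.means_                                   # (K, D)
    sigma = np.sqrt(gmm.covariances_)                 # (K, D), diag model
    w = gmm.weights_                                  # (K,)
    diff = (descriptors[:, None, :] - mu) / sigma     # (T, K, D)
    g_mu = (gamma[..., None] * diff).sum(0) / (T * np.sqrt(w)[:, None])
    g_sig = (gamma[..., None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_sig.ravel()])     # 2 * K * D dimensions
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalization

# Toy usage: random arrays stand in for one descriptor type of one video.
rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(rng.normal(size=(500, 8)))                    # GMM trained on pooled descriptors
fv = fisher_vector(rng.normal(size=(100, 8)), gmm)
print(fv.shape)                                       # (64,) = 2 * K * D
```

In a full pipeline, one such vector would be computed per descriptor type (HOG, HOF, MBHx, MBHy), the vectors concatenated, and the result fed to the one-vs-rest SVM.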