# The data and the assignment are below; I only need answers for the last questions. I can provide answers.

## Top Questions

them on the server’s console in Python. For example, the client sends a data packet to the server, and the server extracts the client’s MAC address, IP address, protocol type, port number and application type from the received packet. The server then displays the extracted information on the console. The tool should let the user select which protocol (TCP or UDP) and which application (HTTP, SMTP, FTP, etc.) to monitor.
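The header parsing that extraction requires can be sketched as below. The field offsets follow the standard Ethernet and IPv4 layouts; the function names and the `APPS` port table are illustrative assumptions, and a real monitor would additionally read frames from a raw socket (which requires elevated privileges) before handing them to these parsers.

```python
import struct

def parse_ethernet(frame):
    """Extract source MAC and EtherType from a raw Ethernet frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = ":".join(f"{b:02x}" for b in src)
    return mac, ethertype, frame[14:]

def parse_ipv4(packet):
    """Extract source IP and protocol number (6 = TCP, 17 = UDP) from an IPv4 header."""
    ihl = (packet[0] & 0x0F) * 4            # header length in bytes
    proto = packet[9]
    src_ip = ".".join(str(b) for b in packet[12:16])
    return src_ip, proto, packet[ihl:]

def parse_ports(segment):
    """Source and destination ports share the same offsets in TCP and UDP."""
    src_port, dst_port = struct.unpack("!HH", segment[:4])
    return src_port, dst_port

# Illustrative subset of well-known ports -> application names.
APPS = {21: "ftp", 25: "smtp", 80: "http", 443: "https"}
```

From a captured frame, the server would call the three parsers in order and look the destination port up in `APPS` to label the application.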

first 2 weeks. 50 people who joined the programme are sampled; their mean weight loss is 9 pounds with a standard deviation of 2.8 pounds. Can we conclude at the .05 level that a person joining the programme will lose less than 10 pounds?

(2) The following is a random sample of 90-day futures prices in dollars for 1 troy oz. of silver from The Wall Street Journal issues in May and June of 1997: 4.74, 4.77, 4.87, 4.91, 4.83, 4.72, 4.92, 4.86, 4.97, 4.71, 4.90, 4.93, 4.75, 4.88, 4.79, 4.83, 4.89. Required: a. Calculate the mean. b. Calculate the median. c. Calculate the standard deviation of the 90-day futures price of silver data.

(3) A mining company needs to estimate the average amount of copper ore per ton mined. A random sample of 50 tons gives a sample mean of 146.75 pounds. The population standard deviation is assumed to be 35.2 pounds. Required: a. Give a 95% confidence interval for the average amount of copper in the population of tons mined. b. Give a 90% confidence interval for the average amount of copper per ton. c. Give a 99% confidence interval for the average amount of copper per ton.

(4) An e-commerce website gets 2,385 visitors on a particular day. Among these, 1,790 visitors explore the products by looking at more pages at the site. Among these 1,790 visitors who explore the products, 387 make a purchase. Required: a. If a visitor is chosen at random from all those who visited the site, what is the probability that the visitor explored the products? b. If a visitor is chosen at random from all those who visited the site, what is the probability that the visitor made a purchase? c. If a visitor is chosen at random from all those who explored the products, what is the probability that the visitor made a purchase? d. Which of the preceding three probabilities is relevant to the design of the home page that leads to the product page?
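The computations in (2), (3) and (4) can be sketched directly from the numbers given. This is a minimal sketch, assuming the standard z-based confidence interval x̄ ± z·σ/√n for (3), since the population standard deviation is stated as known; the function name `ci` is ours, not the assignment's.

```python
import math
import statistics

# (2) Descriptive statistics for the 17 silver futures prices.
prices = [4.74, 4.77, 4.87, 4.91, 4.83, 4.72, 4.92, 4.86, 4.97,
          4.71, 4.90, 4.93, 4.75, 4.88, 4.79, 4.83, 4.89]
mean = statistics.mean(prices)
median = statistics.median(prices)
stdev = statistics.stdev(prices)      # sample standard deviation (n - 1)

# (3) Confidence interval for the mean with known population sigma:
# x_bar +/- z * sigma / sqrt(n)
def ci(x_bar, sigma, n, z):
    half = z * sigma / math.sqrt(n)
    return (x_bar - half, x_bar + half)

ci_95 = ci(146.75, 35.2, 50, 1.96)    # z for 95%
ci_90 = ci(146.75, 35.2, 50, 1.645)   # z for 90%
ci_99 = ci(146.75, 35.2, 50, 2.576)   # z for 99%

# (4) Probabilities from the visitor counts.
p_explore = 1790 / 2385                    # a. P(explored)
p_purchase = 387 / 2385                    # b. P(purchased)
p_purchase_given_explore = 387 / 1790      # c. P(purchased | explored)
```

Note that the 99% interval is the widest and the 90% interval the narrowest, since a higher confidence level requires a larger z multiplier.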

ooking at how likely a given email is to be spam based on the words it contains. In particular, in this problem we’re going to count how often words appear in spam emails within some set of training data (here, a set of emails that have already been marked as spam or not spam manually). We have already started to write a function spam_score(spam_file, not_file, word), which takes in two filenames along with a target word (a lowercase string). Both filenames refer to text files that must be in the same directory as hw07.py (we’ve provided several such files in hw07files.zip). The text files contain one email per line (really just the subject line, to keep things simple); you can assume that these emails will be a series of words separated by spaces, with no punctuation. The first file contains emails that have been identified as spam; the second contains emails that have been identified as not spam. Since you haven’t learned file I/O yet, we’ve provided code that opens the two files and puts the data into two lists of strings (where each element is one line, that is, one email). You must then complete the function so that it returns the spam score for the target word. The spam score is an integer representing the total number of times the target word occurs across all the spam emails, minus the total number of times it occurs in the not-spam emails. Convert all words to lowercase before counting, so that capitalization does not throw off the count.
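Under those rules, the completed function might look like the sketch below. Since the assignment's file-reading starter code isn't shown here, the sketch assumes a plain readlines() over each file, one email per line.

```python
def spam_score(spam_file, not_file, word):
    """Occurrences of `word` across all spam emails, minus its
    occurrences across all not-spam emails (case-insensitive)."""
    # Stand-in for the provided starter code: one email per line.
    with open(spam_file) as f:
        spam_emails = f.readlines()
    with open(not_file) as f:
        not_emails = f.readlines()

    target = word.lower()

    def count(emails):
        total = 0
        for email in emails:
            # Emails are words separated by spaces, no punctuation,
            # so lowercasing and splitting on whitespace is enough.
            total += sum(1 for w in email.lower().split() if w == target)
        return total

    return count(spam_emails) - count(not_emails)
```

A word that is common in spam but rare in ordinary mail thus gets a large positive score, while a neutral word scores near zero (or negative).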

, or statistics.
2. Be careful with missing data. Missing values can appear as an empty string, -1, -98, -99, etc. Check the data and the variable dictionary to make your best judgement.
3. Use the selected columns to predict the "loan_default" column. Try three machine learning algorithms:
   * Logistic regression
   * KNN
   * Naive Bayes classifier
4. For each algorithm, select features, fit the model, then predict and evaluate.
5. Try different techniques to improve the model score. You can choose different columns, transform the data, and normalize the data. Show your improvements.
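The fit/predict/evaluate loop over the three algorithms can be sketched as below. This is a minimal sketch using scikit-learn on a synthetic stand-in dataset; with the real data you would load the CSV, map the missing-value codes (empty string, -1, -98, -99) to NaN, impute or drop them, and use the actual "loan_default" column as the target.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the loan data: 4 features, binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in "loan_default"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Normalizing features often improves KNN and logistic regression.
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "logistic": LogisticRegression(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "naive_bayes": GaussianNB(),
}
scores = {}
for name, model in models.items():
    model.fit(X_train_s, y_train)
    scores[name] = model.score(X_test_s, y_test)   # test accuracy
```

The same loop structure lets you compare feature subsets or transformations: refit all three models on each candidate design and keep the one with the best held-out score.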

ly buy stock in companies that are ranked in the 80th percentile or above in terms of dividends paid in the previous year. You are looking at a company that ranked 5th of 70 companies that paid dividends in 2019. a. Will this company qualify for your portfolio? b. If you had the data on the total dividends paid by each of the 70 companies, which measure of average would be the most meaningful: mean, median, midrange, or mode? Explain.
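Part (a) reduces to a percentile calculation. The sketch below uses one common convention, that a value's percentile rank is the percentage of observations strictly below it; a rank of 5th from the top out of 70 means 65 companies paid less.

```python
# Rank 5 of 70 (1 = highest dividend): 65 companies rank below it.
n, rank_from_top = 70, 5
percentile = (n - rank_from_top) / n * 100   # percent of companies below
qualifies = percentile >= 80                 # the portfolio's 80th-percentile cutoff
```

At roughly the 92.9th percentile, the company clears the 80th-percentile cutoff comfortably.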