Category Archives: Uncategorized

Salud Para Usted y Su Familia

[Health for You and Your Family]

Welcome to the project website

The long-term goal of Salud Para Usted y Su Familia [Health for You and Your Family] (SPUSF) is to reduce the incidence of overweight and obesity among Mexican-heritage children from limited-resource colonias/neighborhoods along the Arizona, New Mexico, and Texas borders with Mexico. The project pursues this goal through a Promotora-led, family-based obesity prevention program that integrates research, education, and extension. The program targets food and beverage consumption, physical activity, and screen time by changing individual and family behaviors and the home environment in a coordinated manner.

  • USDA (2/01/2015-1/31/2020)
  • PI: Joseph Sharkey
  • Family-Focused Childhood Obesity Prevention

 

Instructions

You have received the link to each document by email, but we are keeping all the links here so it is easy for you to keep track of them.

PCORI Award: Diabetes Education and Wellness Through Faith-based Organizations (FBOs) in Texas

The Patient-Centered Outcomes Research Institute (PCORI) has awarded Texas A&M University a Tier I award in the amount of $15,000 for a 9-month period. Tier I awards fund the building of the community and capacity necessary to later develop a patient-centered comparative effectiveness research project. The award goes to Mark Lawley of the Industrial and Systems Engineering Department to examine Diabetes Education and Wellness Through Faith-based Organizations (FBOs) in Texas. Only 17% of submitted proposals were selected for PCORI’s Tier I award.

Diabetes is a chronic disease requiring behavior modification and lifestyle changes to manage. Diabetes is the 7th leading cause of death in the U.S. and costs $245 billion per year. Texas ranks 5th among states in diabetes prevalence. It is difficult to control the fast-growing trend in diabetes prevalence in Texas because risk factors are very common. For example, about 1 in 3 adults in Texas is obese, and 2 in 3 are either overweight or obese. Also, more than 50% of adults in Texas are not physically active, and about 3 in 4 adults eat fewer than 5 servings of fruits and vegetables each day.

Fortunately, proper management reduces the risk of disease progression and complications. Often, disease management is taught through diabetes education and wellness classes. FBOs are organizations or programs associated with a religious congregation and span a variety of religious backgrounds (e.g., Christian, Catholic, Jewish, Muslim). Some FBOs have successfully partnered with health promotion programs to provide preventative health services to at-risk populations with chronic diseases. FBOs have regular access to a captive adult audience of patients and volunteers, and they typically have strong community credibility. Therefore, FBOs can be of central importance in facilitating diabetes management and improving population health.

There are three main thrusts for this project: partnership development, communication structure, and leadership structure. For partnership development, the goal is to build a partnership network of more than 40 researchers, diabetes educators, clinicians, patients, and FBOs who are interested in comparing the awareness, behavior modification, and disease-management success of patient populations who receive diabetes education and wellness from traditional sources vs. FBOs. For communication, the team will use a listserv and is currently developing a website for the partnership network. Finally, for leadership structure, the team will form an internal governance structure to facilitate discussions about using FBOs for diabetes education and wellness.

The partnership team initially consists of three researchers (Mark Lawley, Hye-Chung Kum, Michelle Alvarado) from Texas A&M University (TAMU) and the President (Charles Bell) of the Diabetes Health and Wellness Institute (DHWI) at Juanita J. Craft Recreation Center. Mark Lawley, Ph.D., P.E. (PI), is the TEES Research and One Health Professor of Industrial and Systems Engineering and Biomedical Engineering. Hye-Chung Kum, Ph.D., MSW (Co-PI), is an associate professor of Health Policy and Management at the School of Rural Public Health in the Texas A&M Health Science Center. Michelle Alvarado, Ph.D. (Project Lead), is a postdoctoral research associate in the Industrial and Systems Engineering Department at Texas A&M University.

PCORI’s mission is to help people make informed healthcare decisions, and improve healthcare delivery and its outcomes, by producing and promoting high-integrity, evidence-based information that comes from research guided by patients, caregivers, and the broader healthcare community. This is the second year PCORI has funded Tier I awards in its “Pipeline to Proposal” process. Pipeline to Proposal is a three-tier process that aims to build a national community of patients, stakeholders, and researchers who have the expertise and passion to participate in patient-centered outcomes research and to produce high-quality research proposals. Upon successful completion of a Tier I award, projects are eligible to advance to Tier II ($25,000 for 12 months) for further development of the partnerships. This year, 27 of 30 projects advanced from Tier I to Tier II. Another competitive process is required to receive a Tier III award ($50,000 for 12 months), whose purpose is to develop high-quality research proposals.


Source: http://www.lchdhealthcare.org/information/diabetes-information/

“The incidence of type II diabetes is increasingly prevalent in the Texas population.  We feel that utilizing FBOs as a means of communicating diabetes education and wellness can be effective in reducing this prevalence.  The PCORI funding will be instrumental in allowing us to develop the partnerships necessary to pursue this research idea.”
Dr. Michelle Alvarado

Record Linkage Basics

What Is Record Linkage?

Record linkage (RL), also known as “duplicate detection”, “record matching”, “data matching”, or the “object identity problem”, is the task of finding entries that refer to the same individual in two or more files. It is the appropriate technique when you have to join data sets that do not share a unique database key. A data set that has undergone record linkage is said to be linked.

For example, suppose a table at the University of North Carolina at Chapel Hill stores one student’s information per row, with columns “onyen”, “First Name”, “Last Name”, and “SSN”. The last three fields are also maintained in a table that Bank of America uses to record its customers. Now pick a pair of entries, one from each table. If the nine-digit SSNs in the two entries are the same, and the names from both tables match, we can conclude that the bank’s customer is the UNC student; this pair is linked.
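The exact-match rule described above can be sketched in a few lines of Python. The field names and sample records below are hypothetical stand-ins, not the actual UNC or Bank of America schemas:

```python
# A minimal sketch of a deterministic match between one pair of records.
# The field names ("ssn", "first_name", "last_name") are illustrative only.
def is_match(rec_a, rec_b):
    """Link two records when SSN, first name, and last name all agree."""
    return (rec_a["ssn"] == rec_b["ssn"]
            and rec_a["first_name"].lower() == rec_b["first_name"].lower()
            and rec_a["last_name"].lower() == rec_b["last_name"].lower())

# Hypothetical student row (with an "onyen" key) and bank customer row.
student = {"onyen": "jdoe", "first_name": "Jane", "last_name": "Doe", "ssn": "123-45-6789"}
customer = {"first_name": "Jane", "last_name": "Doe", "ssn": "123-45-6789"}

print(is_match(student, customer))  # True: all three identifiers agree
```

In a real linkage, this check would run over candidate pairs drawn from both tables rather than a single hand-picked pair.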

Record Linkage in Research

When research requires linking historical records to current surveys or records, record linkage builds the connection between the old and new data sets. This is common because the information in records is updated over time as status changes. In longitudinal research, reconstructing a single data set requires linking the data sets from each period to track a series of records.

We also need record linkage to link data across agencies. Each agency, for its own purposes, stores information in its own formats. When research spans different areas, one needs to link the data sets from each of them. However, different agency information systems do not share a common ID. Without common IDs, linking data records reliably and accurately across different data sources is an important problem.

Basic Algorithm of Record Linkage

There are two basic methods of record linkage: deterministic record linkage and probabilistic record linkage. Deterministic record linkage links two (or more) tables based on agreement rules (exact, approximate, and partial) for matching variables, which are often structured hierarchically. That is, deterministic record linkage compares a group of identifiers (or a single identifier) across databases; a link is made if all of the fields in the record agree to an acceptable level. In practice, common identifiers such as Social Security Number, birth date, and the first and last names of individuals serve as the basic fields to compare. Using combinations of different identifying fields increases the validity of the links made.

Probabilistic record linkage is based on the assumption that no single match between variables common to the source databases will identify a client with complete reliability. A probability function indicates that two records belong to the same client through calculation over identifying information, such as last and first name, birth date, Social Security Number, or other fields that exist in both data sets.

The process of record linkage can be conceptualized as identifying matched pairs among all possible pairs of observations from two data sets. In practice, we define the set of true matches (the M set) and the set of true non-matches (the U set), along with an m probability and a u probability. The m probability is the probability that a field agrees given that the record pair being examined is a matched pair, and the u probability is the probability that a field agrees given that the pair is not a matched pair. Although there are many methods to estimate the m and u probabilities, recent studies show that maximum-likelihood-based methods such as the Expectation-Maximization (EM) algorithm are the most effective of the existing algorithms.
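The text above leaves the estimation details abstract. As a rough illustration (not the exact procedure from any of the cited papers), an EM sketch over binary field-agreement patterns, assuming conditionally independent fields and a single match-proportion parameter p, might look like this; all starting values and the synthetic data are invented:

```python
import math
import random

def em_mu(patterns, n_iter=50):
    """Estimate per-field m and u probabilities and the match proportion p
    from 0/1 agreement patterns, assuming fields are conditionally independent."""
    k = len(patterns[0])
    m = [0.8] * k   # initial guess: fields usually agree on true matches
    u = [0.2] * k   # initial guess: fields rarely agree on non-matches
    p = 0.1         # initial guess for the proportion of matched pairs
    for _ in range(n_iter):
        # E-step: posterior probability that each pair is a match
        g = []
        for pat in patterns:
            pm = p * math.prod(m[i] if a else 1 - m[i] for i, a in enumerate(pat))
            pu = (1 - p) * math.prod(u[i] if a else 1 - u[i] for i, a in enumerate(pat))
            g.append(pm / (pm + pu))
        # M-step: re-estimate the parameters from the expected memberships
        gm = sum(g)
        gu = len(g) - gm
        m = [sum(gj * pat[i] for gj, pat in zip(g, patterns)) / gm for i in range(k)]
        u = [sum((1 - gj) * pat[i] for gj, pat in zip(g, patterns)) / gu for i in range(k)]
        p = gm / len(g)
    return m, u, p

# Synthetic demo: 3 fields that agree 90% of the time on matches, 5% otherwise.
random.seed(0)
pairs = [tuple(int(random.random() < 0.9) for _ in range(3)) for _ in range(200)] \
      + [tuple(int(random.random() < 0.05) for _ in range(3)) for _ in range(800)]
m_est, u_est, p_est = em_mu(pairs)
print([round(x, 2) for x in m_est], [round(x, 2) for x in u_est], round(p_est, 2))
```

On this synthetic data the estimated m values come out well above the u values, and p recovers roughly the true 20% match proportion.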

Using the m and u probabilities, a weight is defined to measure the contribution of each field to the probability of correctly classifying each pair into the M or U set. The “agreement” weight, applied when a field agrees between the two records being examined, is calculated as log2(m/u); the “disagreement” weight, applied when a field does not agree, is calculated as log2((1-m)/(1-u)). These weights vary with the distribution of the identifiers’ values and indicate how powerful a particular variable is in determining whether two records are from the same individual.
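For instance, these weights can be computed directly from m and u; the probabilities below are made up for illustration:

```python
import math

def field_weights(m, u):
    """Return the (agreement, disagreement) weights for one field,
    given its m and u probabilities."""
    return math.log2(m / u), math.log2((1 - m) / (1 - u))

# Hypothetical example: last name agrees on 95% of true matches but only
# 1% of non-matches, so agreement is strong evidence of a match.
agree_w, disagree_w = field_weights(m=0.95, u=0.01)
print(round(agree_w, 2), round(disagree_w, 2))  # 6.57 -4.31
```

A large positive agreement weight and a large negative disagreement weight mark a field as highly discriminating.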

Using the composite weight, calculated by summing the individual fields’ weights, one can classify each pair of records into three groups: a link when the composite weight is above an upper threshold value (U), a non-link when the composite weight is below a lower threshold value (L), and a possible link for clerical review when the composite weight falls between L and U. The threshold values can be calculated from the accepted probability of false matches and the accepted probability of false non-matches.
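Putting the pieces together, a toy classifier might sum the field weights and compare against the thresholds. All of the m/u probabilities and the threshold values U and L below are invented for illustration, not derived from any real linkage:

```python
import math

# Hypothetical per-field (m, u) probabilities.
FIELDS = {
    "last_name":  (0.95, 0.01),
    "first_name": (0.90, 0.02),
    "birth_date": (0.85, 0.005),
}
UPPER, LOWER = 8.0, 0.0  # made-up threshold values U and L

def composite_weight(agreements):
    """Sum log2(m/u) for agreeing fields and log2((1-m)/(1-u)) for the rest."""
    total = 0.0
    for field, (m, u) in FIELDS.items():
        if agreements.get(field):
            total += math.log2(m / u)
        else:
            total += math.log2((1 - m) / (1 - u))
    return total

def classify(agreements):
    w = composite_weight(agreements)
    if w > UPPER:
        return "link"
    if w < LOWER:
        return "non-link"
    return "possible link (clerical review)"

print(classify({"last_name": True, "first_name": True, "birth_date": True}))  # link
print(classify({}))                                                           # non-link
print(classify({"last_name": True}))  # possible link (clerical review)
```

Note how a single agreeing field lands in the middle band: the evidence is too weak for an automatic link but too strong to discard, which is exactly the role of clerical review.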

Based on the theory above, the main focus of record linkage research has been how to match fields and how to determine the threshold values U and L so as to improve the accuracy of classifying a given pair as a link or non-link.

Field Matching Methods

Record Matching Techniques

Conclusion

References:

Dunn, Halbert L. (December 1946). “Record Linkage”. American Journal of Public Health 36: 1412–1416.

Bong-Ju Lee. “Matching and Cleaning Administrative Data.” In Studies of Welfare Populations: Data Collection and Research Issues (Editors M. Ver Ploeg, A. Moffit, and C. Citro). National Academy Press.

Ahmed K. Elmagarmid, Panagiotis G. Ipeirotis, Vassilios S. Verykios. Duplicate Record Detection: A Survey. IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 1, January 2007.

Record Linkage References

Why is this important

  • [RECOMMENDED] Goth G. Running on EMPI. Health information exchanges and the ONC keep trying to find the secret sauce of patient matching. Health Data Management. 2014;22(2):52-4, 6 passim.

Detailed survey in computer science

  • [RECOMMENDED] Ahmed K. Elmagarmid, Panagiotis G. Ipeirotis, Vassilios S. Verykios. Duplicate Record Detection: A Survey. IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 1, January 2007.
  • [RECOMMENDED] M. Elfeky, V. Verykios, A. Elmagarmid. TAILOR: A Record Linkage Tool Box. In Proceedings of the 18th International Conference on Data Engineering (ICDE 2002). IEEE Computer Society, Washington, DC, USA
  • N. Koudas, S. Sarawagi, and D. Srivastava. Record linkage: similarity measures and algorithms. In Proceedings of the 2006 ACM SIGMOD international conference on Management of data (SIGMOD ’06). ACM, New York, NY, USA, 802-803. DOI=10.1145/1142473.1142599 http://doi.acm.org.libproxy.lib.unc.edu/10.1145/1142473.1142599

What is actually done in the field

  • [RECOMMENDED] S. Weber, H. Lowe, A. Das, et al. A simple heuristic for blindfolded record linkage. J Am Med Inform Assoc. 2012.
  • [RECOMMENDED] F. Boscoe, D. Schrag, K. Chen, et al. Building capacity to assess cancer care in the Medicaid population in New York State. Health Services Research 2011;46(3): 805-20
  • https://www.census.gov/srd/papers/pdf/rrs2006-02.pdf

Private Record Linkage

  • [RECOMMENDED] Rob Hall and Stephen E. Fienberg: Privacy-Preserving Record Linkage. Privacy in Statistical Databases 2010: Lecture Notes in Computer Science, 2011, Volume 6344/2011, pp 269-283, DOI: 10.1007/978-3-642-15838-4_24.
  • Vatsalan, D., Christen, P., & Verykios, V. S. (2013). A taxonomy of privacy-preserving record linkage techniques. Information Systems, 38(6), 946-969
  • L. Bonomi, L. Xiong, J. Lu. LinkIT: Privacy Preserving Record Linkage and Integration via Transformations (demo track). In SIGMOD, 2013
  • http://hiplab.mc.vanderbilt.edu/projects/soempi/ (most recent work in the field)
  • A. Inan, M. Kantarcioglu, E. Bertino, and M. Scannapieco. A hybrid approach to private record linkage. In ICDE, pp 496-505. IEEE, 2008
  • T. Churches and P. Christen. Blind data linkage using n-gram similarity comparisons. In H. Dai, R. Srikant, and C. Zhang, editors, PAKDD, volume 3056 of Lecture Notes in Computer Science, pp 121-126. Springer, 2004

Recent papers based on data mining and machine learning techniques

  • McCoy AB, Wright A, Kahn MG, Shapiro JS, Bernstam EV, Sittig DF. Matching identifiers in electronic health records: implications for duplicate records and patient safety. BMJ Quality & Safety. Mar 2013;22(3):219-224.
  • Peter Christen. 2008. Automatic Record Linkage using Seeded Nearest Neighbor and Support Vector Machine Classification. Proceedings of the ACM SIGKDD 2008 conference, Las Vegas, August 2008.
  • Sunita Sarawagi and Anuradha Bhamidipaty. 2002. Interactive deduplication using active learning. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’02). ACM, New York, NY, USA, 269-278. DOI=10.1145/775047.775087 http://doi.acm.org/10.1145/775047.775087
  • Bilenko, M.; Kamath, B.; Mooney, R.J.; , “Adaptive Blocking: Learning to Scale Up Record Linkage,” Data Mining, 2006. ICDM ’06. Sixth International Conference on , vol., no., pp.87-96, 18-22 Dec. 2006
    doi: 10.1109/ICDM.2006.13

Corner Stone Papers for Probabilistic Record Linkage

  • H. B. Newcombe, J. M. Kennedy, S. J. Axford, and A. P. James. Automatic Linkage of Vital Records, Science, 130, pp. 954-959. 1959
  • I. P. Fellegi and A. B. Sunter. A theory for record linkage. Journal of the American Statistical Association 1969;64: pp 1183–1210

Papers that look at the impact of record linkage on analysis

  • I. Baldi, A. Ponti, R. Zanetti, G. Ciccone, F. Merletti, and D. Gregori. The impact of record-linkage bias in the Cox model. Journal of Evaluation in Clinical Practice. 16: 92-96. 2010.
  • P. Lahiri and M. Larsen. Regression analysis with linked data. Journal of the American Statistical Association, 100(469):222-230, March 2005
  • F. Scheuren and W. E. Winkler. Regression Analysis of Data Files That Are Computer Matched – Part II. Survey Methodology, 23, 157-165. 1997.

Available Software

  • P. Jurczyk, J. J. Lu, L. Xiong, J. D. Cragan, A. Correa, FRIL: A Tool for Comparative Record Linkage, American Medical Informatics Associations (AMIA) 2008 Annual Symposium
  • Febrl
  • Linkagewiz. http://www.linkagewiz.com/index.htm
  • K. Campbell, D. Deck, and A. Krupski. 2008. Record linkage software in the public domain: a comparison of Link Plus, The Link King, and a `basic’ deterministic algorithm. Health Informatics Journal March 2008 vol. 14 no. 1 5-15

Two CS faculty who focus on record linkage

Managing Diabetes in the Digital World

Submitted Proposals

  • NSF: A Smart Diabetes Management System (SDMS)
    PI: Dr. Lawley
  • Google: Virtual Village: Multimedia Social Networking for Managing Type 2 Diabetes
    PI: Dr. Kum

Current Projects

  • Modeling scheduling and no-shows at the clinic
  • Surveying technology access among clients
  • Modeling continuous glucose monitoring data

Prannay-Timesheet


Spring RA 2016 (19 Jan’16 – 15 May’16)

Total Hours: (20h*15weeks=300H)

Accumulated Hours Worked: 174H till 04/04
Remaining Hours: 126H
Vacation: 20H

04/04/2016: compare_by final code touch     [6H]——-174
03/30/2016: Cohort template 1 update           [10H]
03/26/2016: Finalized output excel – cohort  [9H]
03/25/2016: Finalized code for cohort            [10H]
03/24/2016: edited compare_by macro          [4H]
03/23/2016: edited compare_by macro          [4H]
03/22/2016: edited compare_by macro          [4H]
03/21/2016: edited compare_by macro          [4H]
03/20/2016: edited compare_by macro          [10H]
03/19/2016: edited compare_by macro         [10H]
03/10/2016: Started looking code for waiver [4H]——-103
03/09/2016: Started looking code for waiver [4H]
03/07/2016: Started looking code for waiver [4H]
03/06/2016: Started looking code for waiver [4H]
03/04/2016: Started looking code for waiver [4H]
03/02/2016: Started looking code for waiver [4H]
03/01/2016: Started looking code for waiver [4H]
02/29/2016: Started looking code for waiver [4H]
02/28/2016: Started looking code for waiver [4H]
02/26/2016: Started looking code for waiver [4H]
02/24/2016: Started looking code for waiver [4H]
02/22/2016: Started looking code for waiver [4H]
02/21/2016: Started looking code for waiver [4H]
02/19/2016: Started looking code for waiver [4H]
02/17/2016: Started looking code for waiver [4H]
02/15/2016: Started looking code for waiver [4H]
02/13/2016: Started looking code for waiver [4H]
02/12/2016: Started looking code for waiver [4H]
02/10/2016: Started looking code for waiver [4H] —— 31
02/08/2016: Started looking code for waiver [5H]
02/07/2016: Started looking code for waiver [5H]
02/05/2016: Started looking code for waiver [4H]
02/03/2016: Started looking code for waiver [4H]
02/02/2016: Strata meeting with Loida [2H]
02/01/2016: Strata [4H]
01/25/2016: Meeting/HIPAA [3H]

 

Fall Research 2015 (01 Sep’15 – 15 Dec’15)

Total Hours: (3units*2=280H)
Accumulated Hours Worked : 112H till 10/30
Remaining Hours : 168H
Vacation : 16H+4H

11/20/2015: Thesis meeting [1H]
11/19/2015: Implemented Algorithm point 2 [10H]
10/30/2015: Thesis meeting [2H]
10/29/2015: Started code for final db and hash of count [12H]
10/23/2015: Thesis meeting [2H]
10/22/2015: Thesis work, Algorithm for 100% contribution [8H]
10/16/2015: Thesis meeting [2H]———-78
10/15/2015: Thesis work [4H]
10/09/2015: Poster [10H]
10/02/2015: Research meeting [2H]
10/01/2015: Research paper on tree [8H]
09/21/2015: Research meeting [1H]
09/20/2015: Research [5H]
09/16/2015: Research [5H]
09/08/2015: Research poster presentation [2H]
09/07/2015: Research poster writing [10H]
09/06/2015: Research poster writing [10H]
09/05/2015: Research poster writing [10H]
09/03/2015: Writing sem plan [3H]
09/01/2015: Revising the work done yet for website and setting env.[4H]
08/31/2015: Thesis Meeting [2H]

 

Fall RA 2015 (01 Sep’15 – 15 Dec’15) (2H remaining overall )

Total Hours: (20h*15weeks=300H) – 12H from Summer = 288H 
Accumulated Hours Worked: 266 H till 12/02
Remaining Hours: 22H
Vacation: 16H+4H

12/02/2015 : Worked on eg68 [6H]——————-266
11/26/2015 : Worked on eg68 [3H]
11/25/2015 : Worked on eg68 [0.5H]
11/20/2015 : Worked on eg68 [9.5H]
11/18/2015 : Worked on eg68 [8H]
11/16/2015 : Worked on eg68 [8H]
11/07/2015 : Worked on after report [5H]————231
11/06/2015 : Worked on after report [10H]
10/30/2015 : Worked on after report [8H]
10/28/2015 : Worked on after report [8H]
10/23/2015 : Worked on after report [3H]
10/21/2015 : Worked on after report [8H]
10/19/2015 : Worked on after report [8H]——-189
10/16/2015 : Worked on after report [8H]
10/14/2015 : Worked on after report [8H]
10/09/2015 : Worked on report [10H]
10/07/2015 : Worked on report [10H]
10/05/2015 : Worked on report [10H]
10/04/2015 : Worked on report [10H]
10/03/2015 : Worked on Primary Care [4H]
10/02/2015 : Worked on Primary Care [8H]
09/30/2015 : Worked on Primary Care [8H]———————113
09/28/2015 : Worked on ACS [8H]
09/25/2015 : Worked on ACS [8H]
09/23/2015 : Worked on ACS [8H]
09/21/2015 : Worked on report [5H]
09/20/2015 : Worked on report [10H]
09/19/2015 : Worked on report [10H]
09/18/2015 : Worked on comorb and nyu cleaning [8H]
09/16/2015 : Worked on comorb and nyu cleaning [8H]
09/14/2015 : Worked on comorb and nyu cleaning [8H]
09/11/2015 : Worked on comorb and nyu cleaning [8H]
09/10/2015 : Worked on comorb and nyu cleaning [8H]
09/04/2015 : Worked on comorb and nyu cleaning [8H]
09/02/2015 : Worked on comorb and nyu cleaning [8H]
 

Summer Research 2015 (02 Jun’15 – 12 Aug’15)

Total Hours: (3units*2=280H)
Accumulated Hours Worked : 211H till 08/09
Remaining Hours : 69H

08/09/2015 : Worked on upload 3 csv module [8H]
08/08/2015 : Worked on upload 3 csv module [8H]
08/06/2015 : Worked on thesis-meeting [8H]
08/03/2015 : Worked on thesis-meeting [3H]
07/30/2015 : Read Dr. Kum’s proposal [3H]
07/29/2015 : Read Dr. Kum’s proposal [2H]
07/28/2015 : Worked on display after upload [2H]————179H
07/27/2015 : Worked on display after upload [2H]
07/26/2015 : Worked on display after upload [8H]
07/25/2015 : Worked on display after upload [8H]
07/24/2015 : Worked on thesis-meeting [2H]
07/23/2015 : Read paper on differential privacy jamia [3H]
07/22/2015 : Read paper on differential privacy jamia [2H]
07/21/2015 : Worked on thesis-meeting [2H]
07/20/2015 : Again read paper on l-diversity [2H]
07/19/2015 : Worked on upload csv module [8H]
07/18/2015 : Worked on upload csv module [8H]
07/17/2015 : Worked on thesis-meeting [2H]——————132H
07/16/2015 : Read paper no free lunch [3H]
07/15/2015 : Read paper no free lunch [3H]
07/14/2015 : Reading ruby tutorials [3H]
07/13/2015 : Worked on setting rubymine, git on linux [3H]
07/12/2015 : Worked on website in ruby [8H]
07/11/2015 : Worked on setting ruby on linux [8H]
07/10/2015 : Worked on thesis-meeting [3H]
07/09/2015 : Worked on thesis [4H] –—————————-99H
07/08/2015 : Worked on thesis-meeting [3H]
07/07/2015 : Worked on thesis [2H]
07/06/2015 : Worked on thesis [3H]
07/05/2015 : Worked on thesis [8H]
07/04/2015 : Worked on thesis [8H]
06/29/2015 : Worked on thesis-meeting [2H]
06/27-28/2015 : Worked on thesis [12H]
06/20-21/2015 : Worked on thesis [12H]
06/19/2015 : Worked on thesis-meeting [2H]
06/18/2015 : Worked on thesis [8H]
06/17/2015 : Worked on thesis [8H]
05/31/2015 : Worked on thesis-meeting [8H]
05/30/2015 : Worked on thesis [8H]
05/29/2015 : Worked on thesis [8H]
05/13/2015 : Started on thesis-meeting [3H]

 


Summer RA 2015 (16 May’15 – 31 Aug’15)

Total Hours : (20h*15weeks=300H) – 41H from Spring = 259H 
Accumulated Hours Worked : 251H till 7/31
Remaining Hours : 08H
Vacation : 16H+4H
Carry over to Fall’15 = 20-8 = 12H

07/31/2015 : Worked on refactoring [8H]
07/30/2015 : Worked on refactoring [8H]
07/29/2015 : Worked on refactoring [8H]
07/28/2015 : Ran full dataset [8H]
07/27/2015 : Ran full dataset [8H]
07/24/2015 : Worked on Billings table(primary care) [8H]
07/23/2015 : Worked on Billings table(primary care) [8H]
07/22/2015 : Worked on Billings table(primary care) [8H]
07/21/2015 : Worked on Billings table(primary care) [8H]
07/20/2015 : Worked on Billings table(primary care) [8H]
07/17/2015 : Worked on Billings table(primary care) [8H]————-171
07/16/2015 : Worked on Billings table(charlson index) [8H]
07/15/2015 : Worked on Billings table(charlson index) [8H]
07/14/2015 : Worked on Billings table [8H]
07/13/2015 : Worked on Billings table [8H]
07/10/2015 : Worked on Billings table [8H]
07/09/2015 : Worked on Billings table [9H]
07/08/2015 : Worked on Charlson Comorbidity [8H]
07/07/2015 : Worked on ACSC [8H]
07/06/2015 : Started working on edvisit [8H]
07/03/2015 : Worked on EG68 [8H]
07/02/2015 : Worked on cleaning workspace and regenerating eg68 report [8H]
07/01/2015 : Started working on EG68 [8H]
06/30/2015 : Worked on EG5 AHA data [8H]
06/29/2015 : Worked on EG5 AHA data [8H]
06/26/2015 : Worked on EG5 AHA data [8H]
06/25/2015 : Worked on EG5 AHA data [8H]
06/24/2015 : Worked on EG5 AHA data [10H]
06/23/2015 : Worked on EG5 AHA data [8H]
06/22/2015 : Worked on EG5 AHA data [8H]
06/19/2015 : Worked on EG5 AHA data [8H]

 


 

Spring RA 2015 (20 Jan’15 – 15 May’15)

Total Hours : 320h (20h*16weeks=320h)
Accumulated Hours Worked : 337 (16 weeks)
Vacation : 24h
Remaining Hours : -(17h+24h) = -41h

05/15/2015: 16th week

Hours Worked This Week: 32
05/15/2015: Worked on EG5 for Interim report [8H]
05/14/2015: Worked on EG5 for Interim report [8H]
05/13/2015: Worked on EG5 for Interim report [8H]
05/12/2015: Worked on EG5 for Interim report [8H]

05/11/2015: 15th week (305h)

Hours Worked This Week: 25
05/11/2015: Worked on EG5 for Interim report [10H]
05/08/2015: Worked on EG5 for Interim report [8H]
05/06/2015: Worked on EG5 for Interim report [7H]

05/04/2015:

Hours Worked This Week: 22
05/04/2015: Worked on EG5 for Interim report [6H]
05/01/2015: Worked on EG5 for Interim report [9H]
04/29/2015: Worked on EG5 for Interim report [7H]

04/27/2015:

Hours Worked This Week: 21
04/27/2015: Worked on EG5 for Interim report [6H]
04/24/2015: Ext Advisory Meeting [8H]
04/22/2015: Worked for Ext Advisory meeting ppts [7H]

04/20/2015:

Hours Worked This Week: 20
04/20/2015: Worked for Ext Advisory meeting ppts [6H]
04/17/2015: Started working on EG5 [7H]
04/15/2015: Worked on ppt for ed claims and demographic  [7H]

04/13/2015:

Hours Worked This Week: 21
04/13/2015: worked on ACS 2013 [7H]
04/10/2015: Inpatient outpatient ed claims continued [7H]
04/08/2015: Worked on ppt for ed claims and demographic  [7H]

04/06/2015:

Hours Worked This Week: 20
04/06/2015: Inpatient outpatient ed claims continued [6H]
04/03/2015: Inpatient outpatient ed claims continued [7H]
04/01/2015: Worked on abstract  [7H]

03/30/2015:

Hours Worked This Week: 20
03/30/2015: Inpatient outpatient ed claims continued. [6H]
03/27/2015: Inpatient outpatient ed claims [7H]
03/25/2015: Understanding Proj2 [7H]

03/23/2015:

Hours Worked This Week: 21 (Spring Break 14-22 Mar)
03/23/2015: Started proj2 [7H]
03/13/2015: Finished ARM_abstract 2015 [7H]
03/11/2015: Working on ARM_abstract [7H]

03/09/2015:

Hours Worked This Week: 20
03/09/2015: Combined site 3,7 and generated site report [6H]
03/06/2015: Continuing on proj1-wave1 extension [7H]
03/04/2015: Continuing on proj1-wave1 extension [7H]

03/02/2015:

Hours Worked This Week: 21
03/02/2015: Understanding wave1 ext [6H]
02/27/2015: SQL assignment 5 [9H]
02/25/2015: Started looking proj1-wave1 extension [6H]

02/23/2015:

Hours Worked This Week: 21
02/23/2015: finished assignment 4 and 6 [6H]
02/20/2015: doing assignment 6 – learning macro [9H]
02/18/2015: finished assignment 4 [6H]

02/16/2015:

Hours Worked This Week: 20h
02/16/2015: doing assignment 4 [7H]
02/13/2015: doing assignment 4 [7H]
02/11/2015: finished assignment 3 [6H]

02/09/2015:

Hours Worked This Week: 19h
02/09/2015: finish assignment 2 fully and created NewStudent page[7H]
02/06/2015: Going through assignment 2 and doing goto document and R01 application [6H]
02/04/2015: finish assignment 1 and went through SAS programming [6H]

02/02/2015:

Hours Worked This Week: 14h
Completed CITI training

01/26/2015:

Hours Worked This Week: 20h
Completed HR training and accounts settings, timesheets

01/20/2015:

Started working RAship.

 

Editors in Linux

The most difficult hurdle for many students who start to use Linux is learning to become proficient in an editor. The editor is how you communicate with the computer, so spending a little time becoming proficient in a powerful editor is worth your time.

This page has some information about the most common editors. If you are totally new to Linux, you can use nano (or pico) for simple things to get you going.

Editors

  • nano or pico : for simple editing
    • nano fn.sas
  • emacs : use ESS for sas
  • vim : see below for more information

ESS setup for Emacs users

  1. Use the command "ls -a" to check whether the ".emacs" file exists.
  2. If the ".emacs" file exists, open it with the command "emacs .emacs".
  3. At the top of the file, add the line (load "/opt/HPM/bin/ess-13.09/lisp/ess-site").
  4. Save the file and exit (C-x C-s to save, then C-x C-c to exit).
  5. Edit a SAS file in your directory and check that the editor now applies the different color settings.

old use server

Installation Guide

Connecting to Linux Server

  • Connection configuration (PuTTY & WinSCP & Xming)
  • You will need the IP ADDRESS (66.64.81.149) of the server as you work through the configuration.  Please do not share this IP address with unauthorized users.  It is RESTRICTED information.  Knowing the IP address opens up more potential for attack on the server.  This is why we do not have this information in the pdf document that is more widely accessed.

Using PC SAS to submit jobs to the Linux Server

      • Open the Base SAS application in your PC or laptop
      • At the top of your program, type in the following (run this at the beginning every time you start up Base SAS):
        %let server = 66.64.81.149 5019;
        options comamid = tcp remote = server;
        signon username = _prompt_;
        run;
      • Submit this small program. It will ask for your login/password for the server. Log in.
      • After that, you can put any code you want to run on the server between the two keywords rsubmit and endrsubmit.
      • Example (this code should work for everyone; run it to confirm you are set up correctly):
        rsubmit;
        libname in "/opt/HPM/usr/kum/data";
        proc print data=in.test(obs=10);
        run;
        endrsubmit;
      • More documentation