[IGS-DCWG-57] Re: [IGS-IC-313] RE: [IGS-DCWG-53] Fwd: Feedback: ZIMM station - daily data unavailability
jim.ray at noaa.gov
Fri Mar 19 15:15:50 PDT 2010
******************************************************************************
IGS-DCWG Mail 22 Mar 10:11:52 PDT 2010 Message Number 57
******************************************************************************
Author: Jim Ray
Dear Carey and Mike,
To add to your burdens, I bring up an old, somewhat related issue that
perhaps you can also consider. Namely, some ODCs and RDCs build 24h
RINEX files from smaller incremental data units, such as 1h files. In
doing so, we see persistent and consistent loss of data, usually in
hourly chunks. Sometimes the problems are not at the data center level
but at the AC level, for those ACs that accumulate their own 24h files
from smaller units.
So anyway, I urge you to put in checks at the data center level that
verify that the "definitive" 24h files that get archived actually contain
a full 24h of data.
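For illustration only, a minimal sketch of what such a completeness
check might look like, assuming a decompressed RINEX 2.x observation
file (the hour-level granularity is just one possible choice):

import re
import sys
from collections import Counter

# Sketch: scan the observation section of a RINEX 2.x file and report
# which of the 24 hours contain at least one epoch record. Epoch lines
# are assumed to look like " 10  3  8  0  0  0.0000000  0  9G05...".
EPOCH_RE = re.compile(
    r"^ (\d{2}) +(\d{1,2}) +(\d{1,2}) +(\d{1,2}) +(\d{1,2}) +\d+\.\d+ +[0-6]")

def hours_present(path):
    hours = Counter()
    in_header = True
    with open(path) as f:
        for line in f:
            if in_header:
                in_header = "END OF HEADER" not in line
                continue
            m = EPOCH_RE.match(line)
            if m:
                hours[int(m.group(4))] += 1   # group 4 = hour of day
    return hours

if __name__ == "__main__":
    found = hours_present(sys.argv[1])
    missing = [h for h in range(24) if h not in found]
    print("missing hourly blocks:", missing if missing else "none")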
Also, to repeat an old suggestion: The IVS does have robust
(I think) processes that equalize data and product holdings
among their GDCs (which largely equal the IGS GDCs).
Admittedly, the VLBI situation is a lot simpler than for GNSS,
but the concepts are similar. So it might be possible to borrow
(or steal) some processes from them to retool for the IGS.
Just some suggestions to consider,
--Jim
----- Original Message -----
From: "Schmidt, Michael" <Michael.Schmidt at NRCan-RNCan.gc.ca>
Date: Friday, March 19, 2010 3:18 pm
Subject: [IGS-IC-313] RE: [IGS-DCWG-53] Fwd: Feedback: ZIMM station - daily data unavailability
To: Carey Noll <Carey.Noll at nasa.gov>, igs-dcwg at igscb.jpl.nasa.gov
Cc: igs-ic at igscb.jpl.nasa.gov
> ******************************************************************************
> IGS-IC Mail 19 Mar 12:18:31 PDT 2010 Message Number 313
> ******************************************************************************
>
> Hi Carey
>
> Thanks for raising this - this is a good time to be discussing the
> robustness of data distribution / availability so that we can perhaps
> arrive at some firm proposals in Newcastle.
>
> So, to follow up on our conversation, and in response to the emails
> below and my reading of the position paper, it would appear that the
> main issues are:
>
> 1) Robust distribution of data from Operational Data Centers (ODCs) to
> Global Data Centers (GDCs);
> 2) Robust updating of data when resubmissions are necessary;
> 3) Synchronicity between GDCs;
> 4) Notification of resubmissions.
>
> In the context of this discussion, the term ODC would include all those
> agencies / sites currently pushing data from one or more sites to one
> or more GDCs.
>
> If I understand you and others correctly, there is currently no
> mirroring or synchronization between GDCs. As we all know, this can lead
> to one or a combination of the following:
> - incomplete data holdings;
> - incomplete / incorrect data holdings when taking into account data
>   resubmissions;
>
>
> Addressing items 1-3:
> =====================
> To start to resolve this, can we consider the following:
>
> (A) ODCs push data to all GDCs;
> (B) ODCs push resubmissions to all GDCs.
>
> - A+B would/should address issues 1, 2 and 3 above, except in those
> instances when the availability of, or communication to, a GDC is
> compromised, in which case there may be some latency.
> - A simple system of bookkeeping at each GDC that could be interrogated
> electronically might be desirable (ties into the notification issue);
> see the sketch below.
>
> I recognize that not all ODCs can necessarily meet this suggestion, but
> would hazard a guess that most could (TBC).
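>
> As a purely illustrative sketch of the bookkeeping idea above (the field
> names and file names here are hypothetical, not a proposed IGS format),
> each GDC could append one machine-readable record per received file:
>
> import hashlib
> import json
> import time
>
> # Hypothetical bookkeeping record written by a GDC for each file it
> # receives; other centers or ACs could poll the resulting log.
> def record_receipt(path, source_odc, resubmission=False):
>     with open(path, "rb") as f:
>         digest = hashlib.md5(f.read()).hexdigest()
>     return {
>         "file": path,
>         "source": source_odc,
>         "received_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
>         "md5": digest,
>         "resubmission": resubmission,
>     }
>
> # Append one JSON line per delivery to a day file (name is illustrative).
> with open("receipts_2010_067.log", "a") as log:
>     entry = record_receipt("zimm0670.10d.Z", "swisstopo")
>     log.write(json.dumps(entry) + "\n")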
>
>
> Addressing item 4:
> ==================
> (notification of re-submissions)
> Currently this is handled by issuing Advisories through the IGSSTATION
> list. It is not clear to me that this is the most effective method, for
> a number of reasons:
> I) latency in receiving information;
>    - in other words, by the time the email is sent and acted upon by
>      the user(s) it may be too late to be effective;
>    - from the perspective of the IGS the primary users are the Analysis
>      Centers, while at the same time recognizing that there is a large
>      user group outside the IGS;
> II) incompleteness; I'll be the first to admit that we have not always
>     been consistent in sending out these notices; I suspect we are not
>     alone.
> - other?
>
> So perhaps the best place to start is by defining / examining:
> - who needs notifications;
> - the design latency criteria for notifications;
> - the best method(s) for distribution;
> - an electronic bookkeeping method that can be interrogated by
>   automated processes.
>
> It would seem that, as we become more and more reliant on automated
> processes, automatic detection of resubmissions would be desirable.
> As a start, one could envision automated emails, sent as part of the
> resubmission process by the ODC to a specific distribution list
> consisting of the primary users of IGS data files. These emails could,
> for example, be in a specific format, and thus machine readable and
> actionable (??).
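>
> Purely as an illustration (the field names below are invented here, not
> a proposed standard), one possible shape for such a notice, together
> with the trivial parsing an AC's automation could apply to it:
>
> # Example resubmission notice body and a tiny parser for it; the
> # station/file values are taken from the ZIMM case discussed below.
> NOTICE = """\
> RESUBMISSION: zimm0670.10d.Z
> STATION: ZIMM
> DOY: 067/2010
> REASON: corrupt observation file replaced
> SENT-TO: BKG SIO OLG AIUB CDDIS IGN
> """
>
> def parse_notice(body):
>     fields = {}
>     for line in body.splitlines():
>         key, _, value = line.partition(":")
>         if value:
>             fields[key.strip()] = value.strip()
>     return fields
>
> print(parse_notice(NOTICE)["STATION"])   # -> ZIMM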
>
> At any rate, I think this is a good discussion to be having now, and
> hopefully we can move forward, even if only in incremental steps.
>
> Thanks again
>
> Cheers
>
> Mike
>
> -----Original Message-----
> From: owner-igs-dcwg at igscb.jpl.nasa.gov
> On Behalf Of Carey Noll
> Sent: Tuesday, March 16, 2010 7:25 AM
> To: igs-dcwg at igscb.jpl.nasa.gov
> Cc: igs-ic at igscb.jpl.nasa.gov; Carey Noll
> Subject: [IGS-DCWG-53] Fwd: Feedback: ZIMM station - daily data
> unavailability
>
> ******************************************************************************
> IGS-DCWG Mail 16 Mar 07:23:50 PDT 2010 Message Number 53
> ******************************************************************************
>
> Author: Carey Noll/CDDIS
>
> Our group has not been very active in the last year or so.
> I would like to begin discussions again on how we can best
> serve our user communities and the IGS by ensuring that data
> continue to flow through various channels despite outages of
> a data center, at any level in the hierarchy (global, regional,
> or operational). We have had several instances recently where
> a data center was down and some data were not made available
> because of the outage. Also, there are instances as shown
> below where files are delivered but not propagated through the
> data center chain.
>
> If you recall, we had a detailed plan presented during
> the Darmstadt workshop:
>
> 20Papers%20PDF/5_Bruyninx_posppr_proceedings_final.pdf
> Following the workshop, efforts to generate an implementation
> plan did not get off the ground. I would like to ask the group
> to revisit this plan and come up with ideas on how to implement
> the proposed steps, or a subset of them, in order to provide a
> more redundant, robust data flow. We can entertain new ideas as
> well. It is important for the data centers, working with the IGS
> Infrastructure Committee, to address this issue.
>
> My goal is to have clear steps in mind prior to July's IGS
> Workshop; there will be a data center/data flow presentation that
> would be the best place to outline concrete steps for solving our
> problems. I have also proposed that we hold a DCWG splinter
> meeting at the workshop. Any suggestions on other topics to be
> addressed during that meeting are welcome.
>
> Regards,
> Carey.
> -----
> Begin forwarded message:
>
> > From: Robert Khachikyan <robertk at jpl.nasa.gov>
> > Date: March 15, 2010 1:17:01 PM EDT
> > To: Günter Stangl <guenter.stangl at oeaw.ac.at>
> > Cc: Brockmann Elmar LT <Elmar.Brockmann at swisstopo.ch>,
> > "'epncb at oma.be'" <epncb at oma.be>, IGS Central Bureau
> <igscb at igscb.jpl.nasa.gov
> > >, Habrich Heinz <heinz.habrich at bkg.bund.de>, "Noll, Carey E.
> > (GSFC-6901)" <carey.e.noll at nasa.gov>
> > Subject: Re: Feedback: ZIMM station - daily data unavailability
> >
> > Just an FYI, there might be a discussion going on with the IGS Data
> > Centers to have all IGS data sync'ed. Carey might have additional
> > information.
> >
> > Best Regards,
> > --Robert Khachikyan
> > IGS Central Bureau
> >
> > On 3/15/2010 6:02 AM, Günter Stangl wrote:
> >> Brockmann Elmar LT wrote:
> >>
> >>> Dear Carine, Dear Dominique, Dear Guenter,
> >>>
> >>> We got the mail below, indicating that the daily data of ZIMM are
> >>> not available at DOY 067, 071, 073 at data center OLG.
> >>>
> >>> I checked the upload protocol and saw that the uploads were
> >>> successful to 6 DCs (see below). I assume that some internal
> >>> problems at OLG are responsible for the missing data of ZIMM. By
> >>> the way, the data of twin station ZIM2 are listed as available.
> >>> There were also quite a few "gaps" in the OLG list, so probably
> >>> many station managers got a similar mail.
> >>>
> >>> You suggested to "Please upload the missing data or, if applicable,
> >>> send a EUREF mail to notify the EUREF community of the origin of the
> >>> data gap."
> >>> We do a re-send in an automated way (5 trials every full hour).
> >>> If the data could not be delivered within 5 hours, we do not re-send
> >>> them.
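> >>>
> >>> Only to illustrate that policy, a minimal sketch (try_upload() here
> >>> is a hypothetical stand-in for the actual file push used
> >>> operationally):
> >>>
> >>> import time
> >>>
> >>> def try_upload(filename, data_centre):
> >>>     # placeholder for the real ftp/sftp transfer
> >>>     return False
> >>>
> >>> def push_with_retry(filename, data_centre, attempts=5, wait_s=3600):
> >>>     for n in range(attempts):
> >>>         if try_upload(filename, data_centre):
> >>>             return True
> >>>         if n < attempts - 1:
> >>>             time.sleep(wait_s)   # next attempt on the next full hour
> >>>     return False                 # give up after 5 hours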
> >>>
> >>> I'm a little confused that you asked us to re-upload the data. If the
> >>> data are missing for any reason at one of the two European data
> >>> centers (BKG, OLG), and in the case of ZIMM even at the global DCs
> >>> SIO, CDDIS, and IGN, I would expect that the data centers would
> >>> synchronize the data. That's the reason for giving personal
> >>> feedback to this probably automatically generated mail.
> >>>
> >>> Greetings from Berne,
> >>> Elmar
> >>>
> >>>
> >>>
> >>> Elmar Brockmann
> >>> swisstopo
> >>> Federal Office of Topography
> >>> Geodesy
> >>> Seftigenstrasse 264
> >>> CH-3084 Wabern
> >>> Phone +41 31 963 21 11 operator
> >>> +41 31 963 22 56 direct
> >>> Fax +41 31 963 24 59
> >>> elmar.brockmann at swisstopo.ch
> >>> www.swisstopo.ch
> >>>
> >>>
> >>>
> >>> ----------
> >>>
> >>>
> >>> 067:
> >>> zimm0670.10d.Z : was put to BKG at 09-MAR-2010 00:04:11
> >>> with size 335377
> >>> zimm0670.10n.Z : was put to BKG at 09-MAR-2010 00:04:17
> >>> with size 33849
> >>> zimm0670.10m.Z : was put to BKG at 09-MAR-2010 00:04:17
> >>> with size 893
> >>> zimm0670.10d.Z : was put to SIO at 09-MAR-2010 00:05:09
> >>> with size 335377
> >>> zimm0670.10n.Z : was put to SIO at 09-MAR-2010 00:05:16
> >>> with size 33849
> >>> zimm0670.10m.Z : was put to SIO at 09-MAR-2010 00:05:18
> >>> with size 893
> >>> zimm0670.10d.Z : was put to OLG at 09-MAR-2010 00:06:17
> >>> with size 335377
> >>> zimm0670.10n.Z : was put to OLG at 09-MAR-2010 00:06:24
> >>> with size 33849
> >>> zimm0670.10m.Z : was put to OLG at 09-MAR-2010 00:06:25
> >>> with size 893
> >>> zimm0670.10d.Z : was put to AIUB at 09-MAR-2010 00:07:59
> >>> with size 335377
> >>> zimm0670.10n.Z : was put to AIUB at 09-MAR-2010 00:08:09
> >>> with size 33849
> >>> zimm0670.10m.Z : was put to AIUB at 09-MAR-2010 00:08:10
> >>> with size 893
> >>> zimm0670.10d.Z : was put to CDDIS at 09-MAR-2010 00:09:34
> >>> with size 335377
> >>> zimm0670.10n.Z : was put to CDDIS at 09-MAR-2010 00:09:41
> >>> with size 33849
> >>> zimm0670.10m.Z : was put to CDDIS at 09-MAR-2010 00:09:42
> >>> with size 893
> >>> zimm0670.10d.Z : was put to IGN at 09-MAR-2010 00:10:41
> >>> with size 335377
> >>> zimm0670.10n.Z : was put to IGN at 09-MAR-2010 00:10:49
> >>> with size 33849
> >>> zimm0670.10m.Z : was put to IGN at 09-MAR-2010 00:10:50
> >>> with size 893
> >>>
> >>>
> >>> 071:
> >>> zimm0710.10d.Z : was put to BKG at 13-MAR-2010 00:04:05
> >>> with size 334629
> >>> zimm0710.10n.Z : was put to BKG at 13-MAR-2010 00:04:12
> >>> with size 34818
> >>> zimm0710.10m.Z : was put to BKG at 13-MAR-2010 00:04:14
> >>> with size 859
> >>> zimm0710.10d.Z : was put to SIO at 13-MAR-2010 00:05:08
> >>> with size 334629
> >>> zimm0710.10n.Z : was put to SIO at 13-MAR-2010 00:05:15
> >>> with size 34818
> >>> zimm0710.10m.Z : was put to SIO at 13-MAR-2010 00:05:16
> >>> with size 859
> >>> zimm0710.10d.Z : was put to OLG at 13-MAR-2010 00:06:15
> >>> with size 334629
> >>> zimm0710.10n.Z : was put to OLG at 13-MAR-2010 00:06:21
> >>> with size 34818
> >>> zimm0710.10m.Z : was put to OLG at 13-MAR-2010 00:06:22
> >>> with size 859
> >>> zimm0710.10d.Z : was put to AIUB at 13-MAR-2010 00:07:51
> >>> with size 334629
> >>> zimm0710.10n.Z : was put to AIUB at 13-MAR-2010 00:08:03
> >>> with size 34818
> >>> zimm0710.10m.Z : was put to AIUB at 13-MAR-2010 00:08:04
> >>> with size 859
> >>> zimm0710.10d.Z : was put to CDDIS at 13-MAR-2010 00:09:26
> >>> with size 334629
> >>> zimm0710.10n.Z : was put to CDDIS at 13-MAR-2010 00:09:34
> >>> with size 34818
> >>> zimm0710.10m.Z : was put to CDDIS at 13-MAR-2010 00:09:36
> >>> with size 859
> >>> zimm0710.10d.Z : was put to IGN at 13-MAR-2010 00:10:41
> >>> with size 334629
> >>> zimm0710.10n.Z : was put to IGN at 13-MAR-2010 00:10:50
> >>> with size 34818
> >>> zimm0710.10m.Z : was put to IGN at 13-MAR-2010 00:10:52
> >>> with size 859
> >>>
> >>> 073:
> >>> zimm0730.10d.Z : was put to BKG at 15-MAR-2010 00:04:05
> >>> with size 334959
> >>> zimm0730.10n.Z : was put to BKG at 15-MAR-2010 00:04:11
> >>> with size 34757
> >>> zimm0730.10m.Z : was put to BKG at 15-MAR-2010 00:04:14
> >>> with size 876
> >>> zimm0730.10d.Z : was put to SIO at 15-MAR-2010 00:05:08
> >>> with size 334959
> >>> zimm0730.10n.Z : was put to SIO at 15-MAR-2010 00:05:16
> >>> with size 34757
> >>> zimm0730.10m.Z : was put to SIO at 15-MAR-2010 00:05:17
> >>> with size 876
> >>> zimm0730.10d.Z : was put to OLG at 15-MAR-2010 00:06:16
> >>> with size 334959
> >>> zimm0730.10n.Z : was put to OLG at 15-MAR-2010 00:06:23
> >>> with size 34757
> >>> zimm0730.10m.Z : was put to OLG at 15-MAR-2010 00:06:24
> >>> with size 876
> >>> zimm0730.10d.Z : was put to AIUB at 15-MAR-2010 00:08:01
> >>> with size 334959
> >>> zimm0730.10n.Z : was put to AIUB at 15-MAR-2010 00:08:12
> >>> with size 34757
> >>> zimm0730.10m.Z : was put to AIUB at 15-MAR-2010 00:08:13
> >>> with size 876
> >>> zimm0730.10d.Z : was put to CDDIS at 15-MAR-2010 00:09:33
> >>> with size 334959
> >>> zimm0730.10n.Z : was put to CDDIS at 15-MAR-2010 00:09:42
> >>> with size 34757
> >>> zimm0730.10m.Z : was put to CDDIS at 15-MAR-2010 00:09:43
> >>> with size 876
> >>> zimm0730.10d.Z : was put to IGN at 15-MAR-2010 00:10:42
> >>> with size 334959
> >>> zimm0730.10n.Z : was put to IGN at 15-MAR-2010 00:10:51
> >>> with size 34757
> >>> zimm0730.10m.Z : was put to IGN at 15-MAR-2010 00:10:52
> >>> with size 876
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: epncb at oma.be
> >>>> Sent: Monday, 15 March 2010 09:09
> >>>> To: epncb at oma.be; Brockmann Elmar LT; Wild Urs LT;
> >>>> robertk at jpl.nasa.gov
> >>>> Subject: ZIMM station - daily data unavailability
> >>>>
> >>>>
> >>>> Dear colleagues,
> >>>>
> >>>> All daily data of the EPN stations should be uploaded to both
> >>>> the OLG and BKG regional data centres.
> >>>>
> >>>> We noticed that from 067/2010 to 073/2010, the daily data of
> >>>> the station ZIMM are incomplete. Missing at OLG: DOY 067/2010,
> >>>> 071/2010, 073/2010.
> >>>>
> >>>> Please upload the missing data or, if applicable, send a
> >>>> EUREF mail to notify the EUREF community of the origin of the
> >>>> data gap. Consult
> >>>> and
> >>>> for data
> >>>> upload instructions.
> >>>>
> >>>> Detailed daily data holding information is available from BKG and
> >>>> OLG:
> >>>> BKG: or ...
> >>>>
> >>>> OLG:
> >>>>
> >>>> Best regards,
> >>>>
> >>>> EPN Central Bureau / Carine and Dominique
> >>>> epncb at oma.be
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >> Dear colleagues,
> >>
> >> As you can see from the attachment, the daily files of ZIMM were
> >> corrupt at DOY 067, 071 and 073 (obs files only, not the nav and met
> >> files).
> >>
> >> Concerning Elmar's general statement, synchronization between BKG
> >> and OLG was done until some years ago, but was stopped because BKG
> >> complained about too heavy a load of requests (OLG used wget). We can
> >> discuss this again, perhaps using a time window for synchronization
> >> (three months?).
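> >>
> >> To illustrate the time-window idea (the directory layout and window
> >> length are only examples), a mirroring pass could be restricted to
> >> the last N daily directories rather than the whole archive:
> >>
> >> from datetime import date, timedelta
> >>
> >> # Build the list of year/DOY directories inside the sync window,
> >> # newest first, so only those are fetched (e.g. with wget or rsync).
> >> def directories_to_sync(window_days=90, today=None):
> >>     today = today or date.today()
> >>     dirs = []
> >>     for back in range(window_days):
> >>         d = today - timedelta(days=back)
> >>         dirs.append("%04d/%03d" % (d.year, d.timetuple().tm_yday))
> >>     return dirs
> >>
> >> print(directories_to_sync(window_days=3, today=date(2010, 3, 15)))
> >> # ['2010/074', '2010/073', '2010/072']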
> >>
> >> kind regards
> >>
> >> Guenter
> >>
>
> -----
> Ms. Carey Noll
> Manager, Crustal Dynamics Data Information System (CDDIS)
> Secretary, ILRS Central Bureau
> NASA GSFC
> Code 690.1
> Greenbelt, MD 20771
> USA
>
> E-mail: Carey.Noll at nasa.gov
> Voice: (301) 614-6542
> Fax: (301) 614-6015
> WWW: