Thanks, now it's clear: I left Graz on Friday at dawn, skipping the last day.
If you get the opportunity, do consider publishing in the proceedings. I think jTEI has a pretty good reputation and good outreach in the DH community.
On 06/11/2019 21:55, Francisco Mondaca wrote:
My colleague Jan Bigalke and I presented Kosh, not as a poster, but as
a short paper under the title: 'Introducing an Open, Dynamic and
Efficient Lexical Data Access for TEI-encoded Dictionaries on the
Internet' (Friday, 20/Sep/2019 1:30-3:00). I was not clear enough in my
last email about the type of presentation at Graz, sorry.
I hope to attend a TEI conference (and a LingSIG meeting) soon. If it
is in Europe, that would make it easier, at least for me right now.
On Wed, 2019-11-06 at 15:29 +0100, Piotr Bański wrote:
Thanks for the reply, which I have somehow overlooked until now. I
understand how I managed not to talk to you in Graz (OTOH, for some
reason, I recall I only zipped very quickly through the poster
session). It's a pity we didn't have you with us at the LingSIG meeting,
but it was probably held too early for many of the conference attendees
(the SIG meetings took place before the conference proper). Next time (on the
On 01/11/2019 20:48, Francisco Mondaca wrote:
Super news, thanks!
You are welcome! I am always happy to help if I can.
Your setup for the Rigveda sounds wicked, and the interface is so nice;
have you thought of submitting a paper or poster to the upcoming
conference? (Unfortunately, it's going to take place across the Atlantic
in 2020, in Lincoln, Nebraska.) The submission deadlines are
Thanks! My colleague Borge Kiss (cc) is responsible for the App. He is
also the developer behind the VedaWeb REST API:
We did present the project already, but as posters: at DHD2018 and at
EADH2018 in Galway, and indirectly with a short poster at DH2019 Utrecht.
We presented Kosh at the last TEI conference in Graz (
https://docs.google.com/spreadsheets/d/1xZIoE90QOESVy85ECYasWk8lGov1unWT-yeWB5jtZNg/edit#gid=0). I am afraid that I will not attend the next TEI conference. In the
coming months I need to focus on a project and I will try to stay in
Is your glossing solution also TEI-based?
Yes. We have our TEI data here:
https://github.com/cceh/c-salt_vedaweb_tei. There are some things
missing from the files, e.g. TEI headers and metrical data. So, the
data that you see in the App should be almost the same as what you see
in those files. We are still making some changes to the data, so this
is not the final version of our TEI files. I am thankful for any
suggestions for improving the structure.
I recall a recent wave of
discussions on glossing on the TEI mailing list; it's a hot topic.
Thanks for the advice, I just joined the mailing lists (TEI-
LINGUISTICS and the public discussion list). Until now I have only had
a glimpse at the TEIC/TEI GitHub repo.
On 10/29/19 2:54 PM, Francisco Mondaca wrote:
Is there any chance that you would set things up and host them at
FreeDict? At freedict.org, there should be space left for that, and I
would be more than happy to assist.
Yes, I could take care of it. For the moment we have 82 dictionaries
running on a virtual machine at the University of Cologne. Here is a list
of the current freedict APIs:
The setup that I would recommend would be to deploy Kosh with Docker.
We also have a container (https://github.com/cceh/kosh_sync) that
executes 'git pull' on a repo at a given interval. For instance, we
could set it up to pull every 24 hours from the FreeDict repo. As Kosh
reindexes each dataset (dict) whenever it notices that the related
files have been modified, the whole process of updating the data on
the freedict server and reindexing the entries would run automatically.
I think that, since the JSON configuration files provide a detailed
description of the respective datasets (of each dict), the APIs could
be employed for multiple purposes.
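A rough sketch of how such a periodic sync step could work (the function names and the dataset layout below are hypothetical, not the actual kosh_sync implementation):

```python
import subprocess


def pull_and_find_changed(repo_dir: str) -> list:
    """Run 'git pull' in repo_dir and return the paths of files it updated."""
    before = subprocess.run(
        ["git", "rev-parse", "HEAD"], cwd=repo_dir,
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    subprocess.run(["git", "pull"], cwd=repo_dir, check=True)
    diff = subprocess.run(
        ["git", "diff", "--name-only", before, "HEAD"], cwd=repo_dir,
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in diff.splitlines() if line]


def datasets_to_reindex(changed_files, dataset_files):
    """Map changed file paths to the datasets (dicts) that need reindexing."""
    changed = set(changed_files)
    return sorted(
        name for name, files in dataset_files.items()
        if any(f in changed for f in files)
    )
```

A cron job (or a sleep loop inside the container) would call `pull_and_find_changed` once a day and feed the result to `datasets_to_reindex`, so that only the dicts whose TEI files actually changed get rebuilt.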
What purposes other than word lookup would come to mind?
We use Kosh for our Sanskrit dictionaries encoded in TEI:
In the VedaWeb project (https://vedaweb.uni-koeln.de/rigveda) we link
each token of the Rigveda, a Vedic text, with entries from a dictionary
that was specially compiled for it (Grassmann's dictionary). The
dictionary entries that you see at the bottom of each text are served
by the Grassmann GraphQL API. If you click on one of the entries, you
can also see its related scanned image.
In this case, we saved the ID of the entry in the TEI file where we
modelled the Rigveda with its versions, annotations and translations.
Later, we saved this information in MongoDB and indexed it in
Elasticsearch.
The web application that you see calls a REST API to get the texts
associated with a stanza, and the Grassmann GraphQL API to get the
lemmata for each entry. We don't execute a 'proper' lookup, i.e. we do
not search; we just get the entry from the API based on its ID.
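For illustration, such a keyed fetch over GraphQL could look like the sketch below; the endpoint and the field names (`entry`, `lemma`, `senses`) are invented for the example, since the actual Kosh schema will differ:

```python
import json
from urllib import request

# Hypothetical query shape; the real field names depend on the Kosh schema.
LOOKUP_QUERY = """
query ($id: ID!) {
  entry(id: $id) { id lemma senses }
}
"""


def build_lookup_payload(entry_id):
    """Build the GraphQL request body for a direct fetch by entry ID."""
    return {"query": LOOKUP_QUERY, "variables": {"id": entry_id}}


def fetch_entry(endpoint, entry_id):
    """POST the query to the endpoint; this is a keyed fetch, not a search."""
    body = json.dumps(build_lookup_payload(entry_id)).encode()
    req = request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]["entry"]
```

Because the entry ID is already stored alongside each token, the client never runs a search: it posts one variable and gets back exactly one entry.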