Dialectology and the Gospels

Since starting my research on the Gospels for my dissertation, I have repeatedly wondered (as I idly mused earlier) whether there have been any attempts to identify where the Gospels may have originated or developed based on dialectal considerations. As I run across patterns such as Matthew’s preference for plural nouns, and lexical issues such as synonym substitution that by all appearances don’t significantly influence thematic or other conscious stylistic differences, I automatically think dialect, although of course idiolect variation occurs within a single dialect. This is contingent, of course, on being able to identify the place of origin of other texts with which they may be compared, so I recognize it’s a tall order. I assume there are plenty of guesses about where certain Gospels (John, for instance) originated based on other considerations.

I imagine that narrowing down geographical areas in which the texts (or their authors) might have originated and developed has the potential to influence our understanding of the issues related to the transmission and composition of the traditions/texts of the Gospels.

I’d like to ask anyone who reads this blog and is informed about these issues: how have they been treated in the literature? And if you aren’t personally aware, do you think you could refer me to someone who might be? I’d certainly appreciate it!

  • Matt

    Stephen, it sounds like a problem best solved through software. So, perhaps, the base question is … has anyone made a database of these features in a format that one could query against to get the answers you need.

    • Unfortunately, a lot of the more subtle linguistic analysis must be done by hand. But I did think of how useful it might be, provided all the relevant texts were digitized and searchable (not the case, I’d guess), if the frequency of certain lexical tokens at least could be calculated within each text. The results would then, of course, have to be analyzed by a warm body.

    • Matt

So, what resources, if any, do you know about? I understand your point, but it seems worth seeing what you could squeeze out of them to help with a manual investigation.

      • I really don’t know what’s available right now. I might well be wrong about the availability of such resources — “certain” would be too strong a word, really. Of course, as it stands I’m not embarking on this. I’d just be interested enough to see if someone else has already done it!

        By the way, welcome to my blog! I hope you stick around. 😉

  • Matt

    hehehe I’ve been lurking too long 😉

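The token-frequency idea floated in the comments above could be sketched in a few lines of Python. This is only an illustration, not a real study: the texts below are placeholder strings standing in for digitized, searchable corpora, and a serious version would need proper tokenization for the source languages. It computes each word's frequency per 1,000 tokens so that texts of different lengths can be compared, leaving the interpretation to the warm body.

```python
# Minimal sketch: count lexical tokens in each text and compare
# relative frequencies. All text contents here are made-up
# placeholders, not actual Gospel data.
from collections import Counter

def token_frequencies(text: str) -> Counter:
    """Lowercase the text, strip common punctuation, and count word tokens."""
    tokens = [w.strip(".,;:!?") for w in text.lower().split()]
    return Counter(t for t in tokens if t)

def relative_frequency(counts: Counter, token: str) -> float:
    """Occurrences of `token` per 1,000 tokens, so texts of different
    lengths can be compared directly."""
    total = sum(counts.values())
    return 1000 * counts[token] / total if total else 0.0

# Placeholder corpora standing in for two digitized texts.
texts = {
    "text_a": "the crowds followed him, and the crowds gathered by the sea.",
    "text_b": "the crowd followed him, and the crowd pressed in around him.",
}

for name, body in texts.items():
    counts = token_frequencies(body)
    print(name,
          "crowds:", relative_frequency(counts, "crowds"),
          "crowd:", relative_frequency(counts, "crowd"))
```

Even this toy version shows the shape of the output one would scan by hand: a singular/plural preference (like Matthew's for plural nouns) surfaces as a skew in the two relative frequencies, which a human analyst would then have to weigh against thematic and stylistic explanations.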