List of Publications

Joachim M. Buhmann
Eidgenössische Technische Hochschule Zürich
Departement Informatik, Institut für Maschinelles Lernen
Universitätstrasse 6
CH-8092 Zürich, Switzerland
[email protected]

March 25, 2018

Articles in Peer Reviewed Journals

[1] J. M. Buhmann and K. Schulten, “Associative recognition and storage in a model network of physiological neurons,” Biological Cybernetics, vol. 54, pp. 319–335, 1986.

[2] J. M. Buhmann and K. Schulten, “Influence of noise on the function of a “physiological” neural network,” Biological Cybernetics, vol. 56, pp. 313–327, 1987.

[3] J. M. Buhmann and K. Schulten, “Noise-driven temporal association in neural networks,” Europhysics Letters, vol. 4, pp. 1205–1209, 1987.

[4] J. M. Buhmann, R. Divko, and K. Schulten, “Associative memory with high information content,” Physical Review A, vol. 39, pp. 2689–2692, 1989.

[5] J. M. Buhmann, “Oscillations and low firing rates in associative memory neural networks,” Physical Review A, vol. 40, pp. 4145–4148, 1989.

[6] D. Wang, J. M. Buhmann, and C. von der Malsburg, “Pattern segmentation in associative memory,” Neural Computation, vol. 2, pp. 94–106, 1990.

[7] C. von der Malsburg and J. M. Buhmann, “Sensory segmentation with coupled neural oscillators,” Biological Cybernetics, vol. 67, pp. 233–242, 1992.

[8] M. A. Arbib and J. M. Buhmann, “Neural networks,” in Encyclopedia of Artificial Intelligence (S. C. Shapiro, ed.), vol. 2, (New York, NY, USA), pp. 1016–1060, John Wiley, 1992.

[9] M. Lades, J. C. Vorbrüggen, J. M. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz, and W. Konen, “Distortion invariant object recognition in the dynamic link architecture,” IEEE Transactions on Computers, vol. 42, pp. 300–311, 1993.

[10] J. M. Buhmann and H. Kühnel, “Complexity optimized data clustering by competitive neural networks,” Neural Computation, vol. 5, pp. 75–88, 1993.

[11] J. M. Buhmann and H. Kühnel, “Vector quantization with complexity costs,” IEEE Transactions on Information Theory, vol. 39, pp. 1133–1145, July 1993.

[12] J. M. Buhmann, W. Burgard, A. B. Cremers, D. Fox, T. Hofmann, F. Schneider, J. Strikos, and S. Thrun, “The mobile robot Rhino,” AI Magazine, vol. 16, no. 2, pp. 31–38, 1995.

[13] T. Hofmann and J. M. Buhmann, “Pairwise data clustering by deterministic annealing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 1, pp. 1–14, 1997.

[14] T. Hofmann and J. M. Buhmann, “Competitive learning algorithms for robust vector quantization,” IEEE Transactions on Signal Processing, vol. 46, no. 6, pp. 1665–1675, 1998.

[15] T. Hofmann, J. Puzicha, and J. M. Buhmann, “Unsupervised texture segmentation in a deterministic annealing framework,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 803–818, 1998.

[16] J. M. Buhmann, D. Fellner, M. Held, J. Ketterer, and J. Puzicha, “Dithered color quantization,” Computer Graphics Forum, vol. 17, pp. C-219–C-231, Sept. 1998.

[17] J. Puzicha, T. Hofmann, and J. M. Buhmann, “Histogram clustering for unsupervised segmentation and image retrieval,” Pattern Recognition Letters, vol. 20, pp. 899–909, Sept. 1999.

[18] J. M. Buhmann, J. Malik, and P. Perona, “Image recognition: Visual grouping, recognition and learning,” Proc. Nat. Acad. Science USA, vol. 96, pp. 14203–14204, Dec. 1999.

[19] J. Puzicha and J. M. Buhmann, “Multiscale annealing for grouping and unsupervised texture segmentation,” Computer Vision and Image Understanding, vol. 76, pp. 213–230, Dec. 1999.

[20] J. Puzicha, M. Held, J. Ketterer, J. M. Buhmann, and D. Fellner, “On spatial quantization of color images,” IEEE Transactions on Image Processing, vol. 9, pp. 666–682, Apr. 2000.

[21] H. Klock and J. M. Buhmann, “Data visualization by multidimensional scaling: A deterministic annealing approach,” Pattern Recognition, vol. 33, pp. 651–669, 2000.

[22] J. Puzicha, T. Hofmann, and J. M. Buhmann, “A theory of proximity based clustering: Structure detection by optimization,” Pattern Recognition, vol. 33, pp. 617–634, 2000.

[23] Y. Bengio, J. M. Buhmann, M. Embrechts, and J. Zurada, “Introduction to the special issue on neural networks for data mining and knowledge discovery,” IEEE Transactions on Neural Networks, vol. 11, pp. 545–549, May 2000.

[24] Y. Rubner, J. Puzicha, C. Tomasi, and J. M. Buhmann, “Empirical evaluation of dissimilarity measures for color and texture,” Computer Vision and Image Understanding, vol. 84, pp. 25–43, 2001.

[25] Z. Marx, I. Dagan, J. M. Buhmann, and E. Shamir, “Coupled clustering: a method for detecting structural correspondence,” Journal of Machine Learning Research, vol. 3, pp. 747–780, Dec. 2002.

[26] B. Fischer and J. M. Buhmann, “Path based clustering for grouping of smooth curves and texture segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 513–518, April 2003.

[27] B. Fischer and J. M. Buhmann, “Bagging for path-based clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1411–1415, November 2003.

[28] V. Roth, J. Laub, M. Kawanabe, and J. M. Buhmann, “Optimal cluster preserving embedding of non-metric proximity data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1540–1551, December 2003.

[29] L. Hermes and J. M. Buhmann, “A minimum entropy approach to adaptive image polygonization,” IEEE Transactions on Image Processing, vol. 12, pp. 1243–1258, October 2003.

[30] T. Lange, M. Braun, V. Roth, and J. M. Buhmann, “Stability-based validation of clustering solutions,” Neural Computation, vol. 16, pp. 1299–1323, June 2004.

[31] L. Hermes and J. M. Buhmann, “Boundary-constrained agglomerative segmentation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 42, pp. 1984–1995, September 2004.

[32] J. M. Buhmann, T. Lange, and U. Ramacher, “Image segmentation by networks of spiking neurons,” Neural Computation, vol. 17, no. 5, pp. 1010–1031, 2005.

[33] B. Fischer, V. Roth, F. Roos, J. Grossmann, S. Baginsky, P. Widmayer, W. Gruissem, and J. M. Buhmann, “NovoHMM: A hidden Markov model for de novo peptide sequencing,” Analytical Chemistry, vol. 77, no. 22, pp. 7265–7273, 2005.

[34] J. Laub, V. Roth, J. M. Buhmann, and K.-R. Müller, “On the information and representation of non-Euclidean pairwise data,” Pattern Recognition, vol. 39, pp. 1815–1826, 2006.

[35] B. Fischer, J. Grossmann, S. Baginsky, V. Roth, W. Gruissem, and J. M. Buhmann, “Semi-supervised LC/MS alignment for differential proteomics,” Bioinformatics, vol. 22, pp. e132–e140, 2006.

[36] T. Zöller and J. M. Buhmann, “Robust image segmentation using resampling and shape constraints,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, pp. 1147–1164, July 2007.

[37] P. Orbanz and J. M. Buhmann, “Nonparametric Bayesian image segmentation,” International Journal of Computer Vision, vol. 77, pp. 25–44, 2008.

[38] B. Fischer, V. Roth, and J. Buhmann, “Time-series alignment by non-negative multiple generalized canonical correlation analysis,” BMC Bioinformatics, vol. 8, no. S10, p. S4, 2007.

[39] F. Roos, R. Jacob, J. Grossmann, B. Fischer, J. M. Buhmann, W. Gruissem, S. Baginsky, and P. Widmayer, “PepSplice: Cache-efficient search algorithms for comprehensive identification of tandem mass spectra,” Bioinformatics, vol. 23, pp. 3016–3023, 2007.

[40] J. Grossmann, B. Fischer, K. Bärenfaller, J. Owiti, J. M. Buhmann, W. Gruissem, and S. Baginsky, “A workflow to increase the detection rate of proteins from unsequenced organisms in high-throughput proteomics experiments,” Proteomics, vol. 7, no. 23, pp. 4245–4254, 2007.

[41] B. Fischer, V. Roth, and J. M. Buhmann, “Adaptive bandwidth selection for biomarker discovery in mass spectrometry,” Artificial Intelligence in Medicine, vol. 45, pp. 207–214, 2008.

[42] M. Braun, J. M. Buhmann, and K.-R. Müller, “On relevant dimensions in kernel feature spaces,” Journal of Machine Learning Research, vol. 9, pp. 1875–1908, 2008.

[43] B. Ommer and J. M. Buhmann, “Seeing the objects behind the dots: Recognition in videos from a moving camera,” International Journal of Computer Vision, vol. 83, pp. 57–71, 2009.

[44] M. Claassen, R. Aebersold, and J. M. Buhmann, “Proteome coverage prediction with infinite Markov models,” Bioinformatics, vol. 25, no. 12, pp. i154–i160, 2009. (“Ian Lawson van Toch” Outstanding Student Paper Award).

[45] L. Reiter, M. Claassen, S. P. Schrimpf, M. Jovanovic, A. Schmidt, J. M. Buhmann, M. O. Hengartner, and R. Aebersold, “Protein identification false discovery rates for very large proteomics datasets generated by tandem mass spectrometry,” Molecular & Cellular Proteomics, vol. 8, no. 11, pp. 2405–2417, 2009.

[46] P. J. Wild, T. J. Fuchs, R. Stoehr, D. Zimmermann, S. Frigerio, B. Padberg, I. Steiner, E. C. Zwarthoff, M. Burger, S. Denzinger, F. Hofstaedter, G. Kristiansen, T. Hermanns, H.-H. Seifert, M. Provenzano, T. Sulser, V. Roth, J. M. Buhmann, H. Moch, and A. Hartmann, “Detection of Urothelial Bladder Cancer Cells in Voided Urine Can Be Improved by a Combination of Cytology and Standardized Microsatellite Analysis,” Cancer Epidemiol Biomarkers Prev, vol. 18, no. 6, pp. 1798–1806, 2009.

[47] B. Ommer and J. M. Buhmann, “Learning the compositional nature of visual object categories for recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 3, pp. 501–516, 2010.

[48] V. Kaynig, B. Fischer, E. Müller, and J. M. Buhmann, “Fully automatic stitching and distortion correction of transmission electron microscope images,” Journal of Structural Biology, vol. 171, no. 2, pp. 163–173, 2010.

[49] S. Raman, T. Fuchs, P. Wild, E. Dahl, J. Buhmann, and V. Roth, “Infinite mixture-of-experts model for sparse survival regression with application to breast cancer,” BMC Bioinformatics, vol. 11, no. Suppl 8, p. S8, 2010.

[50] I. Cima, R. Schiess, P. Wild, M. Kaelin, P. Schüffler, V. Lange, P. Picotti, R. Ossola, A. Templeton, O. Schubert, T. Fuchs, T. Leippold, S. Wyler, J. Zehetner, W. Jochum, J. M. Buhmann, T. Cerny, H. Moch, S. Gillessen, R. Aebersold, and W. Krek, “Cancer genetics-guided discovery of serum biomarker signatures for diagnosis and prognosis of prostate cancer,” Proceedings of the National Academy of Sciences USA, vol. 108, no. 8, pp. 3342–3347, 2011.

[51] K. H. Brodersen, F. Haiss, C. S. Ong, F. Jung, M. Tittgemeyer, J. M. Buhmann, B. Weber, and K. E. Stephan, “Model-based feature construction for multivariate decoding,” NeuroImage, vol. 56, no. 2, pp. 601–615, 2011.

[52] K. H. Brodersen, T. M. Schofield, A. P. Leff, C. S. Ong, E. I. Lomakina, J. M. Buhmann, and K. E. Stephan, “Generative embedding for model-based classification of fMRI data,” PLoS Comput Biol, vol. 7, no. 6, p. e1002079, 2011.

[53] T. J. Fuchs and J. M. Buhmann, “Computational pathology: Challenges and promises for tissue analysis,” Computerized Medical Imaging and Graphics, vol. 35, no. 7-8, pp. 515–530, 2011.

[54] C. Sigg, T. Dikk, and J. M. Buhmann, “Speech enhancement using generative dictionary learning,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 6, pp. 1698–1712, 2012.

[55] M. Frank, A. P. Streich, D. Basin, and J. M. Buhmann, “Multi-assignment clustering for Boolean data,” Journal of Machine Learning Research, vol. 13, pp. 459–489, Mar. 2012.

[56] Q. Zhong, A. G. Busetto, J. P. Fededa, J. M. Buhmann, and D. W. Gerlich, “Unsupervised modeling of cell morphology dynamics for time-lapse microscopy,” Nature Methods, vol. 9, no. 7, pp. 711–713, 2012.

[57] K. H. Brodersen, C. Mathys, J. R. Chumbley, J. Daunizeau, C. Ong, J. M. Buhmann, and K. E. Stephan, “Mixed-effects inference on classification performance in hierarchical datasets,” Journal of Machine Learning Research, vol. 13, pp. 3133–3176, 2012.

[58] K. H. Brodersen, K. Wiech, E. I. Lomakina, C. shu Lin, J. M. Buhmann, U. Bingel, M. Ploner, K. E. Stephan, and I. Tracey, “Decoding the perception of pain from fMRI using multivariate pattern analysis,” NeuroImage, vol. 63, no. 3, pp. 1162–1170, 2012.

[59] C. D. Sigg, T. Dikk, and J. M. Buhmann, “Learning dictionaries with bounded self-coherence,” IEEE Signal Processing Letters, vol. 19, no. 12, pp. 861–864, 2012.

[60] S. Meyer, T. J. Fuchs, A. K. Bosserhoff, F. Hofstädter, A. Pauer, V. Roth, J. M. Buhmann, I. Moll, N. Anagnostou, J. M. Brandner, K. Ikenberg, H. Moch, M. Landthaler, T. Vogt, and P. J. Wild, “A seven-marker signature and clinical outcome in malignant melanoma: A large-scale tissue-microarray study with two independent patient cohorts,” PLoS ONE, vol. 7, p. e38222, June 2012.

[61] S. Heux, T. J. Fuchs, J. M. Buhmann, N. Zamboni, and U. Sauer, “A high-throughput metabolomics method to predict high concentration cytotoxicity of drugs from low concentration profiles,” Metabolomics, vol. 8, pp. 433–443, 2012. doi:10.1007/s11306-011-0386-0.

[62] M. Frank, J. M. Buhmann, and D. Basin, “Role mining with probabilistic models,” ACM Transactions on Information and System Security (TISSEC), vol. 15, no. 4, p. 15, 2013.

[63] A. G. Busetto, A. Hauser, G. Krummenacher, M. Sunnåker, S. Dimopoulos, C. S. Ong, J. Stelling, and J. M. Buhmann, “Near-optimal experimental design for model selection in systems biology,” Bioinformatics, vol. 29, no. 20, pp. 2625–2632, 2013.

[64] T. Käser, G.-M. Baschera, A. G. Busetto, S. Klingler, B. Solenthaler, J. M. Buhmann, and M. H. Gross, “Towards a framework for modelling engagement dynamics in multiple learning domains,” International Journal of Artificial Intelligence in Education, vol. 22, no. 1-2, pp. 59–83, 2013.

[65] K. H. Brodersen, J. Daunizeau, C. Mathys, J. R. Chumbley, J. M. Buhmann, and K. E. Stephan, “Variational Bayesian mixed-effects inference for classification studies,” NeuroImage, vol. 76, pp. 345–361, 2013.

[66] D. Mahapatra, P. J. Schüffler, J. A. Tielbeek, J. Makanyanga, S. A. Taylor, F. M. Vos, and J. M. Buhmann, “Automatic detection and segmentation of Crohn’s disease tissues from abdominal MRI,” IEEE Transactions on Medical Imaging, vol. 32, pp. 2332–2347, 2013.

[67] D. Mahapatra, P. J. Schüffler, J. A. Tielbeek, J. Makanyanga, S. A. Taylor, J. M. Buhmann, and F. M. Vos, “A supervised learning approach for Crohn’s disease detection using higher-order image statistics and a novel shape asymmetry measure,” Journal of Digital Imaging, vol. 25, pp. 920–931, 2013.

[68] P. J. Schüffler, T. J. Fuchs, C. S. Ong, P. J. Wild, N. J. Rupp, and J. M. Buhmann, “TMARKER: A free software toolkit for histopathological cell counting and staining estimation,” Journal of Pathology Informatics, vol. 4, no. 2, p. 2, 2013.

[69] D. Mahapatra and J. M. Buhmann, “Prostate MRI segmentation using learned semantic knowledge and graph cuts,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 3, pp. 756–764, 2014.

[70] K. H. Brodersen, L. Deserno, F. Schlagenhauf, Z. Lin, W. D. Penny, J. M. Buhmann, and K. E. Stephan, “Dissecting psychiatric spectrum disorders by generative embedding,” NeuroImage: Clinical, vol. 4, pp. 98–111, 2014.

[71] C. Giesen, H. Wang, D. Schapiro, N. Zivanovic, A. Jacobs, B. Hattendorf, P. Schüffler, D. Grolimund, J. Buhmann, S. Brandt, Z. Varga, P. Wild, D. Günther, and B. Bodenmiller, “Highly multiplexed imaging of tumor tissues with subcellular resolution by mass cytometry,” Nature Methods, vol. 11, no. 4, pp. 417–422, 2014.

[72] P. J. Schüffler, D. Schapiro, C. Giesen, H. Wang, B. Bodenmiller, and J. M. Buhmann, “Automatic single cell segmentation on highly multiplexed tissue images,” Cytometry A, vol. 87, pp. 936–942, Oct. 2015.

[73] A. P. Streich and J. M. Buhmann, “Asymptotic analysis of estimators on multi-label data,” Machine Learning, vol. 99, no. 3, pp. 373–409, 2015.

[74] E. I. Lomakina, S. Paliwal, A. O. Diaconescu, K. H. Brodersen, E. A. Aponte, J. M. Buhmann, and K. E. Stephan, “Inversion of hierarchical Bayesian models using Gaussian processes,” NeuroImage, vol. 118, pp. 133–145, 2015.

[75] J. Zimmermann, K. H. Brodersen, H. R. Heinimann, and J. M. Buhmann, “A model-based approach to predicting graduate-level performance using indicators of undergraduate-level performance,” Journal of Educational Data Mining, vol. 7, pp. 151–176, 2015.

[76] I. Arganda-Carreras, S. C. Turaga, D. R. Berger, D. Ciresan, A. Giusti, L. M. Gambardella, J. Schmidhuber, D. Laptev, S. Dwivedi, J. M. Buhmann, T. Liu, M. Seyedhosseini, T. Tasdizen, L. Kamentsky, R. Burget, V. Uher, X. Tan, C. Sun, T. Pham, E. Bas, M. G. Uzunbas, A. Cardona, J. Schindelin, and H. S. Seung, “Crowdsourcing the creation of image segmentation algorithms for connectomics,” Frontiers in Neuroanatomy, vol. 9, 2015.

[77] A. P. Oliveira, S. Dimopoulos, A. G. Busetto, S. Christen, R. Dechant, L. Falter, M. Haghir Chehreghani, S. Jozefczuk, C. Ludwig, F. Rudroff, J. C. Schulz, A. González, A. Soulard, D. Stracka, R. Aebersold, J. M. Buhmann, M. N. Hall, M. Peter, U. Sauer, and J. Stelling, “Inferring causal metabolic signals that regulate the dynamic TORC1-dependent transcriptome,” Molecular Systems Biology, vol. 11, no. 4, p. 802, 2015.

[78] D. Mahapatra, F. M. Vos, and J. M. Buhmann, “Active learning based segmentation of Crohn’s disease from abdominal MRI,” Computer Methods and Programs in Biomedicine, vol. 128, pp. 75–85, 2016.

[79] D. Mahapatra and J. M. Buhmann, “Visual saliency-based active learning for prostate magnetic resonance imaging segmentation,” Journal of Medical Imaging, vol. 3, no. 1, p. 014003, 2016.

[80] J. Burgstaller, P. Schüffler, J. Buhmann, G. Andreisek, S. Winklhofer, F. D. Grande, M. Mattle, F. Brunner, G. Karakoumis, J. Steurer, and U. Held, “Is there an association between pain and magnetic resonance imaging parameters in patients with lumbar spinal stenosis?,” Spine, vol. 41, no. 17, pp. 1053–1062, 2016.

[81] Q. Zhong, J. H. Rüschoff, T. Guo, M. Gabrani, P. J. Schüffler, M. Rechsteiner, Y. Liu, T. J. Fuchs, N. J. Rupp, C. Fankhauser, J. M. Buhmann, S. Perner, C. Poyet, M. Blattner, D. Soldini, H. Moch, M. A. Rubin, A. Noske, J. Rüschoff, M. C. Haffner, W. Jochum, and P. J. Wild, “Image-based computational quantification and visualization of genetic alterations and tumour heterogeneity,” Scientific Reports, vol. 6, p. 24146, 2016.

[82] G. Krummenacher, C. S. Ong, S. Koller, S. Kobayashi, and J. M. Buhmann, “Wheel defect detection with machine learning,” IEEE Transactions on Intelligent Transportation Systems, vol. PP, no. 99, pp. 1–12, 2017.

[83] J. G. Zilly, J. M. Buhmann, and D. Mahapatra, “Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation,” Computerized Medical Imaging and Graphics, vol. 55, pp. 28–41, 2017.

[84] S. Frässle, E. I. Lomakina, A. Razi, K. J. Friston, J. M. Buhmann, and K. E. Stephan, “Regression DCM for fMRI,” NeuroImage, vol. 155, pp. 406–421, 2017.

[85] J. Buhmann, A. Gronskiy, M. Mihalák, T. Pröger, R. Šrámek, and P. Widmayer, “Robust optimization in the presence of uncertainty: A generic approach,” Journal of Computer and System Sciences, vol. 94, pp. 135–166, 2018.

Articles in Peer Reviewed Conference Proceedings and Book Chapters

[1] J. M. Buhmann, “Mustererkennung in selbstorganisierenden, neuronalen Netzwerken,” Master’s thesis, Physik-Department, Technische Universität München, D-8000 München, Arcisstraße, Fed. Rep. Germany, 1985.

[2] J. M. Buhmann, H. Ritter, and K. Schulten, “Neurokybernetik und künstliche Intelligenz,” Computerwoche, vol. 44, pp. 66–71, 1985.

[3] J. M. Buhmann and K. Schulten, “A physiological neural network as an autoassociative memory,” in Disordered Systems and Biological Organization (E. Bienenstock, F. Fogelman-Soulié, and G. Weisbuch, eds.), (Heidelberg, Berlin, New York), pp. 273–279, Springer, 1986.

[4] J. M. Buhmann and K. Schulten, “Influence of noise on the behaviour of an autoassociative neural network,” in Neural Networks for Computing (J. S. Denker, ed.), pp. 71–76, American Institute of Physics Publication, 1986.

[5] J. M. Buhmann, R. Divko, H. Ritter, and K. Schulten, “Physicists explore human and artificial intelligence,” in Proceedings of the International Symposium on “Structure and Dynamics in Biomolecules” (E. Clementi and S. Chin, eds.), pp. 301–327, Plenum Publishing Corporation, 1986.

[6] J. M. Buhmann, R. Divko, H. Ritter, and K. Schulten, “Physik und Gehirn – Wie dynamische Modelle von Nervennetzen natürliche Intelligenz erklären,” MC Mikrocomputerzeitschrift, vol. 9, pp. 108–120, 1987.

[7] J. M. Buhmann and K. Schulten, “Storing sequences of biased patterns in neural networks with stochastic dynamics,” in Neural Computers (R. Eckmiller and C. von der Malsburg, eds.), (Berlin, Heidelberg, New York), pp. 231–242, Springer, 1988.

[8] J. M. Buhmann, Neuronale Netzwerke als assoziative Speicher und als Systeme zur Mustererkennung. PhD thesis, Physik-Department, Technische Universität München, D-8000 München, Arcisstraße, Fed. Rep. Germany, 1988.

[9] J. M. Buhmann and K. Schulten, “Invariant pattern recognition by means of fast synaptic plasticity,” in Proceedings of IEEE ICNN–88 (R. Eckmiller and C. von der Malsburg, eds.), pp. I125–I132, 1988.

[10] J. M. Buhmann, R. Divko, and K. Schulten, “On sparsely coded associative memories,” in Neural Networks from Models to Applications (L. Personnaz and G. Dreyfus, eds.), pp. 360–371, E.S.P.C.I., Paris, 1988.

[11] J. M. Buhmann, J. Lange, and C. von der Malsburg, “Distortion invariant object recognition by matching hierarchically labeled graphs,” in International Joint Conference on Neural Networks (IJCNN’89), Washington, pp. I 155–159, IEEE, 1989.

[12] J. M. Buhmann, M. Lades, and C. von der Malsburg, “Size and distortion invariant object recognition by hierarchical graph matching,” in International Joint Conference on Neural Networks (IJCNN’90), San Diego, pp. II 411–416, IEEE, 1990.

[13] J. M. Buhmann and C. von der Malsburg, “Sensory segmentation by neural oscillators,” in International Joint Conference on Neural Networks (IJCNN’91), Seattle, pp. II 603–607, IEEE, 1991.

[14] D. Wang, J. M. Buhmann, and C. von der Malsburg, “Pattern segmentation in associative memory,” in Olfaction: A Model System for Computational Neuroscience (J. L. Davis and H. B. Eichenbaum, eds.), pp. 213–224, MIT Press, 1991.

[15] J. M. Buhmann, J. Lange, C. von der Malsburg, J. C. Vorbrüggen, and R. P. Würtz, “Object recognition with Gabor functions in the dynamic link architecture,” in Neural Networks for Signal Processing (B. Kosko, ed.), (Englewood Cliffs, NJ 07632), pp. 121–159, Prentice Hall, 1992.

[16] J. M. Buhmann and H. Kühnel, “Complexity optimized vector quantization: A neural network approach,” in Data Compression Conference (J. Storer, ed.), (Los Alamitos, CA, USA), pp. 12–21, IEEE Computer Society Press, 1992.

[17] J. M. Buhmann and H. Kühnel, “Unsupervised and supervised data clustering with competitive neural networks,” in International Joint Conference on Neural Networks (IJCNN’92), Baltimore, pp. IV-796–IV-801, IEEE, 1992.

[18] M. Lades, J. M. Buhmann, and F. Eeckman, “Distortion invariant object recognition under drastically varying lighting conditions,” in New Computing Techniques in Physics Research III, pp. 339–346, World Scientific, 1994.

[19] J. M. Buhmann, M. Lades, and F. Eeckman, “Illumination-invariant face recognition with a contrast sensitive silicon retina,” in Advances in Neural Information Processing Systems 6, pp. 769–776, Morgan Kaufmann Publishers, 1994.

[20] J. M. Buhmann and T. Hofmann, “Central and pairwise data clustering by competitive neural networks,” in Advances in Neural Information Processing Systems 6, pp. 104–111, Morgan Kaufmann Publishers, 1994.

[21] J. M. Buhmann and T. Hofmann, “A maximum entropy approach to pairwise data clustering,” in Proceedings of the International Conference on Pattern Recognition, Hebrew University, Jerusalem, vol. II, (Los Alamitos, CA, USA), pp. 207–212, IEEE Computer Society Press, 1994.

[22] T. Hofmann and J. M. Buhmann, “Multidimensional scaling and data clustering,” in Advances in Neural Information Processing Systems 7, pp. 459–466, MIT Press, 1995.

[23] T. Hofmann and J. M. Buhmann, “Hierarchical pairwise data clustering by mean-field annealing,” in Proceedings of ICANN’95, NEURONIMES’95, vol. II, pp. 197–202, EC2 & Cie, 1995.

[24] J. M. Buhmann, “Oscillatory associative memories,” in Handbook of Brain Theory and Neural Networks (M. Arbib, ed.), pp. 691–694, Bradford Books/MIT Press, 1995.

[25] J. M. Buhmann, “Data clustering and learning,” in Handbook of Brain Theory and Neural Networks (M. Arbib, ed.), pp. 278–282, Bradford Books/MIT Press, 1995.

[26] T. Hofmann and J. M. Buhmann, “An annealed “neural gas” network for robust vector quantization,” in Proceedings of ICANN’96, (Berlin, Heidelberg, New York), pp. 151–156, Springer, 1996.

[27] T. Hofmann, J. Puzicha, and J. M. Buhmann, “Unsupervised segmentation of textured images by pairwise data clustering,” in Proceedings of the International Conference on Image Processing 1996, Lausanne, (Los Alamitos, CA, USA), pp. III 137–140, IEEE Computer Society Press, 1996.

[28] T. Fröhlinghaus and J. M. Buhmann, “Regularizing phase-based stereo,” in Proceedings of the International Conference on Pattern Recognition 1996, Wien, (Los Alamitos, CA, USA), pp. A 451–456, IEEE Computer Society Press, 1996.

[29] T. Hofmann and J. M. Buhmann, “Inferring hierarchical clustering structures by deterministic annealing,” in Proceedings of the Knowledge Discovery and Data Mining Conference 1996, Portland, (Redwood City, CA, USA), pp. 363–366, AAAI Press, 1996.

[30] T. Fröhlinghaus and J. M. Buhmann, “Real-time phase-based stereo for a mobile robot,” in Proceedings of the 1st Euromicro Workshop on Advanced Mobile Robots, pp. 178–185, IEEE Computer Society Press, 1996.

[31] J. Puzicha, T. Hofmann, and J. M. Buhmann, “Unsupervised texture segmentation on the basis of scale space features,” in Proceedings of the Summer School on Scale Space Theory in Computer Vision, (Copenhagen, Denmark), University of Copenhagen, 1996.

[32] H. Klock and J. M. Buhmann, “Multidimensional scaling by deterministic annealing,” in Energy Minimization Methods in Computer Vision and Pattern Recognition, International Workshop EMMCVPR’97, Venice, Italy, May 21–23, 1997 (M. Pelillo and E. R. Hancock, eds.), vol. 1223 of Lecture Notes in Computer Science, pp. 245–260, Springer Verlag, 1997.

[33] T. Hofmann, J. Puzicha, and J. M. Buhmann, “Deterministic annealing for unsupervised texture segmentation,” in Proceedings EMMCVPR’97 (M. Pelillo and E. R. Hancock, eds.), Lecture Notes in Computer Science, pp. 213–228, Springer Verlag, 1997.

[34] J. M. Buhmann and T. Hofmann, “Robust vector quantization by competitive learning,” in Proceedings ICASSP’97, Munich, pp. I:139–142, IEEE Computer Society, 1997.

[35] J. Puzicha, T. Hofmann, and J. M. Buhmann, “Non-parametric similarity measures for unsupervised texture segmentation and image retrieval,” in Proceedings CVPR’97, Puerto Rico, pp. 267–272, IEEE Computer Society, 1997.

[36] T. Hofmann, J. Puzicha, and J. M. Buhmann, “An optimization approach to unsupervised hierarchical texture segmentation,” in Proceedings ICIP-97, Santa Barbara, pp. III 213–216, IEEE Computer Society, 1997.

[37] H. Klock, A. Polzer, and J. M. Buhmann, “Video coding by region-based motion compensation and spatio-temporal wavelet transform,” in Proceedings ICIP-97, Santa Barbara, pp. III 436–439, IEEE Computer Society, 1997.

[38] H. Klock, A. Polzer, and J. M. Buhmann, “Region-based motion compensated 3D-wavelet transform coding of video,” in Proceedings ICIP-97, Santa Barbara, pp. II 776–779, IEEE Computer Society, 1997.

[39] J. M. Buhmann, “Stochastic algorithms for exploratory data analysis: Data clustering and data visualization,” in Learning in Graphical Models (M. Jordan, ed.), pp. 405–419, Kluwer Academic Publisher, 1998.

[40] T. Hofmann and J. M. Buhmann, “Active data clustering,” in Advances in Neural Information Processing Systems 10 (M. I. Jordan, M. J. Kearns, and S. A. Solla, eds.), pp. 528–534, MIT Press, 1998.

[41] M. Held and J. M. Buhmann, “Unsupervised on-line learning of decision trees for hierarchical data analysis,” in Advances in Neural Information Processing Systems 10 (M. I. Jordan, M. J. Kearns, and S. A. Solla, eds.), pp. 514–520, MIT Press, 1998.

[42] J. Puzicha and J. M. Buhmann, “Multiscale annealing for real-time unsupervised texture segmentation,” in Proceedings of the International Conference on Computer Vision (ICCV’98), pp. 267–273, 1998.

[43] J. M. Buhmann and N. Tishby, “A theory of unsupervised learning,” in Neural Networks and Generalization (C. Bishop, ed.), pp. 57–68, Kluwer Academic Publisher, 1998.

[44] J. M. Buhmann, “Empirical risk approximation: An induction principle for unsupervised learning,” Tech. Rep. IAI-TR-98-3, Department of Computer Science III / University of Bonn, 1998.

[45] J. Puzicha, T. Hofmann, and J. M. Buhmann, “Deterministic annealing: Fast physical heuristics for real time optimization of large systems,” in Proceedings of the 15th IMACS World Congress on Scientific Computation, Modelling and Applied Mathematics, 1997.

[46] J. Ketterer, J. Puzicha, M. Held, M. Fischer, J. M. Buhmann, and D. Fellner, “On spatial quantization of color images,” in Proceedings of the 5th European Conference on Computer Vision, Freiburg, Germany, June 1998 (H. Burkhardt and B. Neumann, eds.), vol. 1406 of Lecture Notes in Computer Science, pp. 563–577, Springer Verlag, 1998.

[47] J. M. Buhmann and M. Held, “Unsupervised learning without overfitting: Empirical risk approximation as an induction principle for reliable clustering,” in Proceedings of the International Conference on Advances in Pattern Recognition 1998, Plymouth (S. Singh, ed.), pp. 167–176, Springer Verlag, 1998.

[48] J. Puzicha, T. Hofmann, and J. M. Buhmann, “Discrete mixture models for unsupervised image segmentation,” in Mustererkennung 1998, pp. 135–142, Springer Verlag, 1998.

[49] J. Puzicha, T. Hofmann, and J. M. Buhmann, “Histogram clustering for unsupervised image segmentation,” in Proceedings CVPR’99, Colorado Springs, pp. 602–608, IEEE Computer Society, 1999.

[50] L. Hermes, D. Frieauff, J. Puzicha, and J. M. Buhmann, “Support vector machines for land usage classification in Landsat TM imagery,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, IGARSS’99, 1999.

[51] J. Puzicha, J. Rubner, J. M. Buhmann, and C. Tomasi, “Empirical evaluation of dissimilarity measures for color and texture,” in Proceedings of the International Conference on Computer Vision (ICCV’99, Corfu), pp. 1165–1168, IEEE Computer Society, 1999.

[52] W. Förstner, J. M. Buhmann, A. Faber, and P. Faber, Mustererkennung 1999. Berlin, Heidelberg, New York: Springer Verlag, 1999.

[53] J. M. Buhmann and J. Puzicha, “Unsupervised learning for robust texture segmentation,” in Performance Characterization and Evaluation of Computer Vision Algorithms (R. Klette, S. Stiehl, and M. Viergever, eds.), pp. 211–225, Kluwer Academic Publishers, 2000.

[54] J. M. Buhmann and M. Held, “Model selection in clustering by uniform convergence bounds,” in Advances in Neural Information Processing Systems 12 (S. A. Solla, T. Leen, and K.-R. Müller, eds.), pp. 216–222, MIT Press, 2000.

[55] T. Zöller and J. M. Buhmann, “Active learning for hierarchical pairwise data clustering,” in Proceedings of the International Conference on Pattern Recognition (ICPR’2000), Barcelona, (Los Alamitos, CA, USA), vol. 2, pp. 186–189, IEEE Computer Society Press, 2000.

[56] L. Hermes and J. M. Buhmann, “Feature selection for support vector machines,” in Proceedings of the International Conference on Pattern Recognition (ICPR’2000), Barcelona, (Los Alamitos, CA, USA), vol. 2, pp. 716–719, IEEE Computer Society Press, 2000.

[57] S. Will, L. Hermes, J. Puzicha, and J. M. Buhmann, “Learning optimal texture edge detectors,” in Proceedings of the International Conference on Image Processing (ICIP 2000), (Los Alamitos, CA, USA), pp. 877–880, IEEE Computer Society Press, 2000.

[58] B. Stenger, N. Paragios, V. Ramesh, F. Coetzee, and J. M. Buhmann, “Topology free hidden Markov models: Application to background modeling,” in Proceedings of the International Conference on Computer Vision (ICCV 2001), (Los Alamitos, CA, USA), pp. I: 294–301, IEEE Computer Society Press, 2001.

[59] J. M. Buhmann, “Clustering principles and empirical risk approximation,” in Proceedings of the International Conference on Applied Stochastic Models and Data Analysis (ASMDA’01) (G. Govaert and N. Limnios, eds.), pp. 14–20, 2001.

[60] Z. Marx, I. Dagan, and J. M. Buhmann, “Coupled clustering: a method for detecting structural correspondence,” in Proceedings of the 18th International Conference on Machine Learning, pp. 353–360, Morgan Kaufmann, San Francisco, CA, 2001.

[61] B. Fischer, T. Zöller, and J. M. Buhmann, “Path based pairwise data clustering with application to texture segmentation,” in Proceedings EMMCVPR’01 (M. Figueiredo, J. Zerubia, and A. K. Jain, eds.), Lecture Notes in Computer Science 2134, pp. 235–250, Springer Verlag, 2001.

[62] J. M. Buhmann and J. Puzicha, “Annealing: Fast heuristics for large scale non-linear optimization,” in Online Optimization of Large Scale Systems (M. Grötschel, S. O. Krumke, and J. Rambau, eds.), pp. 740–778, Springer Verlag, 2001.

[63] J. M. Buhmann and J. Puzicha, “Multiscale annealing and robustness: Fast heuristics for large scale non-linear optimization,” in Online Optimization of Large Scale Systems (M. Grötschel, S. O. Krumke, and J. Rambau, eds.), pp. 779–802, Springer Verlag, 2001.

[64] J. M. Buhmann and M. Held, “On the optimal number of clusters in histogram clustering,” in Advances in Pattern Recognition, Springer Verlag, 2001.

[65] L. Hermes and J. M. Buhmann, “Contextual classification by entropy-based polygonization,” in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR’01), vol. II, pp. 442–447, IEEE Computer Society, 2001.

[66] M. Suing, L. Hermes, and J. M. Buhmann, “A new contour-based approach to object recognition for assembly line robots,” in Pattern Recognition (B. Radig and S. Florczyk, eds.), Lecture Notes in Computer Science 2191, pp. 329–336, Springer Verlag, 2001.

[67] J. M. Buhmann, “Data clustering and learning,” in Handbook of Brain Theory and Neural Networks (2nd edition) (M. Arbib, ed.), pp. 308–312, MIT Press, 2002.

[68] J. M. Buhmann, “Model validation,” in Handbook of Brain Theory and Neural Networks (2nd edition) (M. Arbib, ed.), pp. 666–668, MIT Press, 2002.

[69] M. L. Braun and J. M. Buhmann, “The noisy Euclidean Traveling Salesman Problem and learning,” in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), pp. 351–358, MIT Press, 2002.

[70] V. Roth, T. Lange, M. L. Braun, and J. M. Buhmann, “A resampling based approach to cluster validation,” in COMPSTAT 2002, Proceedings in Computational Statistics (W. Härdle and B. Rönz, eds.), pp. 123–128, Physica-Verlag, Heidelberg, 2002.

[71] L. Hermes, T. Zöller, and J. M. Buhmann, “Parametric distributional clustering for color image segmentation,” in Proceedings of the 7th European Conference on Computer Vision, Copenhagen, June 2002 (A. Heyden, G. Sparr, M. Nielsen, and P. Johansen, eds.), vol. III of Lecture Notes in Computer Science 2352, pp. 577–591, Springer Verlag, 2002.

[72] T. Zöller, L. Hermes, and J. M. Buhmann, “Combined color and texture segmentation by parametric distributional clustering,” in Proceedings of the International Conference on Pattern Recognition (ICPR’2002), Quebec City (R. Kasturi, D. Laurendeau, and C. Suen, eds.), vol. II, pp. 627–630, IEEE Computer Society, 2002.

[73] T. Zöller and J. M. Buhmann, “Self-organized clustering of mixture models for combined color and texture segmentation,” in Texture 2002 (M. Chantler, ed.), pp. 163–167, Heriot-Watt University, 2002.

[74] V. Roth, T. Lange, M. L. Braun, and J. M. Buhmann, “Stability-based model order selection in clustering with applications to gene expression data,” in Proceedings of the International Conference on Artificial Neural Networks, July 2002, Madrid (J. R. Dorronsoro, ed.), vol. 2415 of Lecture Notes in Computer Science, pp. 607–612, Springer Verlag, 2002.

[75] B. Fischer and J. M. Buhmann, “Data resampling for path based clustering,” in Pattern Recognition (L. V. Gool, ed.), vol. 2449 of Lecture Notes in Computer Science, pp. 206–214, Springer Verlag, 2002.

[76] V. Roth, J. Laub, K.-R. Müller, and J. M. Buhmann, “Going metric: denoising pairwise data,” in Advances in Neural Information Processing Systems 15 (S. Becker, S. Thrun, and K. Obermayer, eds.), pp. 841–848, MIT Press, 2003.

[77] T. Lange, M. L. Braun, V. Roth, and J. M. Buhmann, “Stability-based model selection,” in Advances in Neural Information Processing Systems 15 (S. Becker, S. Thrun, and K. Obermayer, eds.), pp. 633–640, MIT Press, 2003.

[78] L. Hermes and J. M. Buhmann, “Semi-supervised image segmentation by parametric distributional clustering,” in Energy Minimization Methods in Computer Vision and Pattern Recognition (A. Rangarajan, M. Figueiredo, and J. Zerubia, eds.), Lecture Notes in Computer Science 2683, pp. 229–245, Springer Verlag, 2003.

[79] B. Ommer and J. M. Buhmann, “A compositionality architecture for perceptual feature grouping,” in Energy Minimization Methods in Computer Vision and Pattern Recognition (A. Rangarajan, M. Figueiredo, and J. Zerubia, eds.), Lecture Notes in Computer Science 2683, pp. 275–290, Springer Verlag, 2003.

[80] W. Chen and J. M. Buhmann, “A new distance measure for probabilistic shape modeling,” in Pattern Recognition (B. Michaelis, ed.), vol. 2781 of Lecture Notes in Computer Science, pp. 507–514, Springer Verlag, 2003.

[81] B. Fischer, V. Roth, and J. M. Buhmann, “Clustering with the connectivity kernel,” in Advances in Neural Information Processing Systems 16 (S. Thrun, L. K. Saul, and B. Schölkopf, eds.), pp. 89–96, MIT Press, 2004.

[82] T. Zöller and J. M. Buhmann, “Shape constrained image segmentation by parametric distributional clustering,” in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR’04), vol. I, pp. I386–I393, IEEE Computer Society, 2004.

[83] A. K. Jain, M. H. Law, A. Topchy, and J. M. Buhmann, “Landscape of clustering algorithms,” in Proceedings of the International Conference on Pattern Recognition (ICPR’2004), Cambridge, UK (J. Kittler, M. Petrou, and M. Nixon, eds.), vol. I, pp. 260–263, IEEE Computer Society, 2004.

[84] S. C. Stilkerich and J. M. Buhmann, “Massively parallel architecture for an unsupervised segmentation model,” in Proc. IEEE ICSES, International Conference on Signals and Electronic Systems, Poznan, Poland, pp. 361–364, 2004.

[85] B. Fischer, V. Roth, J. M. Buhmann, J. Grossmann, S. Baginsky, W. Gruissem, F. Roos, and P. Widmayer, “A hidden Markov model for de novo peptide sequencing,” in Advances in Neural Information Processing Systems 17 (L. Saul, Y. Weiss, and L. Bottou, eds.), pp. 457–464, MIT Press, 2005.

[86] T. Lange, M. H. C. Law, A. K. Jain, and J. M. Buhmann, “Learning with constrained and unlabelled data,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 20-26 June 2005, San Diego, CA, USA, pp. 731–738, IEEE Computer Society, 2005.

[87] T. Lange and J. M. Buhmann, “Combining partitions by probabilistic label aggregation,” in KDD, Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, Illinois, USA, August 21-24, 2005 (R. Grossman, R. Bayardo, and K. P. Bennett, eds.), pp. 147–156, ACM, 2005.

[88] P. Orbanz and J. M. Buhmann, “SAR images as mixtures of Gaussian mixtures,” in Proceedings of the International Conference on Image Processing (ICIP’05), vol. 2, pp. 209–212, IEEE Computer Society, 2005.

[89] B. Ommer and J. M. Buhmann, “Object categorization by compositional graphical models,” in Energy Minimization Methods in Computer Vision and Pattern Recognition (A. Rangarajan, B. Vemuri, and A. L. Yuille, eds.), Lecture Notes in Computer Science 3757, pp. 235–250, Springer Verlag, 2005.

[90] B. Ommer and J. M. Buhmann, “Learning compositional categorization models,” in ECCV 2006 Proceedings (A. Leonardis, H. Bischof, and A. Pinz, eds.), vol. III of Lecture Notes in Computer Science 3753, pp. 316–329, Springer Verlag, 2006.

[91] P. Orbanz and J. M. Buhmann, “Smooth image segmentation by nonparametric Bayesian inference,” in ECCV 2006 Proceedings (A. Leonardis, H. Bischof, and A. Pinz, eds.), vol. I of Lecture Notes in Computer Science 3751, pp. 444–457, Springer Verlag, 2006.

[92] T. Lange and J. M. Buhmann, “Fusion of similarity data in clustering,” in Advances in Neural Information Processing Systems (NIPS) 19, (Cambridge, MA), pp. 723–730, MIT Press, 2006.

[93] B. Ommer, M. Sauter, and J. M. Buhmann, “Learning top-down grouping of compositional hierarchies for recognition,” in CVPR (Workshop on Perceptual Organization in Computer Vision), 2006.

[94] A. Rabinovich, T. Lange, J. Buhmann, and S. Belongie, “Model order selection and cue combination for image segmentation,” in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), 17–22 June 2006, New York, NY, USA, pp. 1130–1137, IEEE Computer Society, 2006.

[95] H. Peter, B. Fischer, and J. M. Buhmann, “Probabilistic de novo peptide sequencing with doubly charged ions,” in Pattern Recognition - Symposium of the DAGM 2006 (K. Franke, K.-R. Müller, B. Nickolay, and R. Schäfer, eds.), vol. 4174 of LNCS, pp. 424–433, Springer, 2006.

[96] P. Wey, B. Fischer, H. Bay, and J. M. Buhmann, “Dense stereo by triangular meshing and cross validation,” in Pattern Recognition - Symposium of the DAGM 2006 (K. Franke, K.-R. Müller, B. Nickolay, and R. Schäfer, eds.), vol. 4174 of LNCS, pp. 708–717, Springer, 2006.

[97] M. L. Braun, T. Lange, and J. M. Buhmann, “Model selection in kernel methods based on a spectral analysis of label information,” in Pattern Recognition - 28th DAGM Symposium (K. Franke, K.-R. Müller, B. Nickolay, and R. Schäfer, eds.), vol. 4174 of LNCS, pp. 344–353, Springer, 2006.

[98] I. Guyon, A. Alamdari, G. Dror, and J. Buhmann, “Performance prediction challenge,” in Neural Networks, 2006. IJCNN ’06. International Joint Conference on, pp. 1649–1656, IEEE, 2006.

[99] M. L. Braun, J. M. Buhmann, and K. Müller, “Denoising and dimension reduction in feature space,” in Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006 (B. Schölkopf, J. C. Platt, and T. Hofmann, eds.), pp. 185–192, MIT Press, 2007.

[100] B. Ommer and J. M. Buhmann, “Learning the compositional nature of visual objects,” in 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007, Minneapolis, Minnesota, USA, IEEE Computer Society, 2007.

[101] L. Busse, P. Orbanz, and J. M. Buhmann, “Cluster analysis of heterogeneous rank data,” in Proceedings of the 24th International Conference on Machine Learning (Z. Ghahramani, ed.), pp. 113–120, International Machine Learning Society, 2007.

[102] B. Ommer and J. M. Buhmann, “Compositional object recognition, segmentation, and tracking in video,” in Energy Minimization Methods in Computer Vision and Pattern Recognition, 6th International Conference, EMMCVPR 2007, Ezhou, China, August 27-29, 2007, Proceedings (A. L. Yuille, S. C. Zhu, D. Cremers, and Y. Wang, eds.), vol. 4679 of Lecture Notes in Computer Science, pp. 318–333, Springer, 2007.

[103] P. Orbanz, S. Braendle, and J. M. Buhmann, “Bayesian order-adaptive clustering for video segmentation,” in Energy Minimization Methods in Computer Vision and Pattern Recognition, 6th International Conference, EMMCVPR 2007, Ezhou, China, August 27-29, 2007, Proceedings (A. L. Yuille, S. C. Zhu, D. Cremers, and Y. Wang, eds.), vol. 4679 of Lecture Notes in Computer Science, pp. 334–349, Springer, 2007.

[104] T. Lange and J. M. Buhmann, “Regularized data fusion improves image segmentation,” in Pattern Recognition - Symposium of the DAGM 2007 (F. Hamprecht, C. Schnörr, and B. Jähne, eds.), vol. 4713 of LNCS, pp. 234–243, Springer, 2007.

[105] T. Lange and J. M. Buhmann, “Kernel-based grouping of histogram data,” in Proc. ECML/PKDD-2007, vol. 4701 of LNCS, pp. 632–639, Springer, 2007.

[106] B. Fischer, V. Roth, and J. M. Buhmann, “Time-series alignment by non-negative multiple generalized canonical correlation analysis,” in Computational Intelligence Methods for Bioinformatics and Biostatistics, vol. 4578 of LNAI, pp. 505–511, Springer, 2007.

[107] C. Sigg, B. Fischer, B. Ommer, V. Roth, and J. M. Buhmann, “Nonnegative CCA for audiovisual source separation,” in IEEE Workshop on Machine Learning for Signal Processing, IEEE Press, 2007.

[108] V. Kaynig, B. Fischer, R. Wepf, and J. M. Buhmann, “Fully automatic registration of electron microscopy images with high and low resolution,” in Microscopy and Microanalysis, 2007.

[109] Y. Moh, P. Orbanz, and J. M. Buhmann, “Music preference learning with partial information,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, IEEE Press, 2008.

[110] T. Lange, M. H. Law, A. K. Jain, and J. Buhmann, Constrained Clustering: Advances in Algorithms, Theory, and Applications, ch. Clustering with Constraints: A Mean-Field Approximation Perspective. Chapman & Hall, 2008.

[111] C. D. Sigg and J. M. Buhmann, “Expectation-maximization for sparse and non-negative PCA,” in Proc. 25th International Conference on Machine Learning, pp. 960–967, 2008.

[112] Y. Moh and J. M. Buhmann, “Kernel expansion for online preference tracking,” in 9th International Conference on Music Information Retrieval, pp. 167–172, 2008.

[113] T. J. Fuchs, T. Lange, P. J. Wild, H. Moch, and J. M. Buhmann, “Weakly supervised cell nuclei detection and segmentation on tissue microarrays of renal cell carcinoma,” in Pattern Recognition, vol. 5096/2008 of Lecture Notes in Computer Science, pp. 173–182, Springer Berlin / Heidelberg, 2008.

[114] Y. Moh, W. Einhäuser, and J. M. Buhmann, “Automatic detection of learnability under unreliable and sparse user feedback,” in Pattern Recognition - DAGM 2008, pp. 173–182, Springer, 2008.

[115] T. J. Fuchs, P. J. Wild, H. Moch, and J. M. Buhmann, “Computational pathology analysis of tissue microarrays predicts survival of renal clear cell carcinoma patients,” in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2008, vol. 5242/2008 of Lecture Notes in Computer Science, pp. 1–8, Springer Berlin / Heidelberg, 2008.

[116] V. Kaynig, B. Fischer, and J. M. Buhmann, “Probabilistic image registration and anomaly detection by nonlinear warping,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.

[117] A. Streich and J. M. Buhmann, “Classification of multi-labeled data: A generative approach,” in Proceedings of the ECML-PKDD 2008 Conference, vol. 5212 of Lecture Notes in Artificial Intelligence (LNAI), pp. 390–405, 2008.

[118] M. Frank, D. Basin, and J. M. Buhmann, “A class of probabilistic models for role engineering,” in 15th ACM Conference on Computer and Communications Security (CCS 2008), ACM, 2008.

[119] Y. Moh and J. M. Buhmann, “Manifold regularization for semi-supervised sequential learning,” in Proceedings of the 34th IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1617–1620, April 2009.

[120] A. G. Busetto and J. M. Buhmann, “Structure identification by optimized interventions,” Journal of Machine Learning Research - Proceedings Track, vol. 5, pp. 57–64, 2009.

[121] A. G. Busetto, C. S. Ong, and J. M. Buhmann, “Optimized expected information gain for nonlinear dynamical systems,” in Proceedings of the International Conference on Machine Learning (L. Bottou and M. Littman, eds.), (Montreal), Omnipress, June 2009.

[122] P. Pletscher, C. S. Ong, and J. M. Buhmann, “Spanning tree approximations for conditional random fields,” in Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS) 2009 (D. van Dyk and M. Welling, eds.), (Clearwater Beach, Florida), pp. 408–415, JMLR: W&CP 5, 2009.

[123] A. P. Streich, M. Frank, D. Basin, and J. M. Buhmann, “Multi-assignment clustering for Boolean data,” in Proceedings of the 26th International Conference on Machine Learning (L. Bottou and M. Littman, eds.), (Montreal), pp. 969–976, Omnipress, June 2009.

[124] M. Frank, A. Streich, D. Basin, and J. M. Buhmann, “A probabilistic approach to hybrid role mining,” in 16th ACM Conference on Computer and Communications Security (CCS 2009), ACM, 2009.

[125] T. J. Fuchs and J. M. Buhmann, “Inter-active learning of randomized tree ensembles for object detection,” in ICCV Workshop on On-line Learning for Computer Vision, 2009, IEEE, 2009.

[126] X. Floros, T. J. Fuchs, M. P. Rechsteiner, G. Spinas, H. Moch, and J. M. Buhmann, “Graph-based pancreatic islet segmentation for early type 2 diabetes mellitus on histopathological tissue,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009 (G.-Z. Yang, D. Hawkes, D. Rueckert, A. Noble, and C. Taylor, eds.), vol. 5761 of Lecture Notes in Computer Science, pp. 633–640, Springer Berlin / Heidelberg, 2009.

[127] T. J. Fuchs, J. Haybaeck, P. J. Wild, M. Heikenwalder, H. Moch, A. Aguzzi, and J. M. Buhmann, “Randomized tree ensembles for object detection in computational pathology,” in Proceedings of the 5th International Symposium on Visual Computing – ISVC 2009, Las Vegas, Nevada, USA, vol. 5875, Part I of Lecture Notes in Computer Science, pp. 367–378, Springer Berlin / Heidelberg, 2009.

[128] A. P. Streich and J. M. Buhmann, “Ignoring co-occurring sources in learning from multi-labeled data leads to model mismatch,” in MLD09: ECML/PKDD 2009 Workshop on Learning from Multi-Label Data, 2009.

[129] A. G. Busetto and J. M. Buhmann, “Stable Bayesian parameter estimation for biological dynamical systems,” in Proceedings of the 12th IEEE International Conference on Computational Science and Engineering, pp. 148–157, 2009.

[130] C. D. Sigg, T. Dikk, and J. M. Buhmann, “Speech enhancement with sparse coding in learned dictionaries,” in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pp. 4758–4761, 2010.

[131] Y. Moh and J. M. Buhmann, “Regularized online learning of pseudometrics,” in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pp. 1990–1993, 2010.

[132] M. Claassen, R. Aebersold, and J. Buhmann, “Proteome coverage prediction for integrated proteomics datasets,” in Research in Computational Molecular Biology (B. Berger, ed.), vol. 6044 of Lecture Notes in Computer Science, pp. 96–109, Springer Berlin / Heidelberg, 2010. doi:10.1007/978-3-642-12683-3_7.

[133] V. Kaynig, T. Fuchs, and J. M. Buhmann, “Neuron geometry extraction by perceptual grouping in ssTEM images,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR, San Francisco), 2010.

[134] A. Vezhnevets and J. M. Buhmann, “Towards weakly supervised semantic segmentation by means of multiple instance and multitask learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR, San Francisco), 2010.

[135] M. Frank, J. M. Buhmann, and D. Basin, “On the definition of role mining,” in Proceedings of the 15th ACM Symposium on Access Control Models and Technologies, SACMAT ’10, (New York, NY, USA), pp. 35–44, ACM, 2010.

[136] J. M. Buhmann, “Information theoretic model validation for clustering,” in International Symposium on Information Theory, Austin, Texas, pp. 1398–1402, IEEE, 2010.

[137] K. H. Brodersen, C. S. Ong, K. E. Stephan, and J. M. Buhmann, “The binormal assumption on precision-recall curves,” in Pattern Recognition (ICPR), 2010 20th International Conference on, (Los Alamitos, CA, USA), pp. 4263–4266, IEEE Computer Society, 2010.

[138] K. Brodersen, C. S. Ong, K. Stephan, and J. Buhmann, “The balanced accuracy and itsposterior distribution,” in Pattern Recognition (ICPR), 2010 20th International Conferenceon, pp. 3121 –3124, aug. 2010.

[139] P. Pletscher, C. S. Ong, and J. M. Buhmann, “Entropy and margin maximization for struc-tured output learning,” in Proceedings of the 20th European Conference on Machine Learning(ECML) (J. L. Balcazar, F. Bonchi, A. Gionis, and M. Sebag, eds.), vol. 6321 of Lecture Notesin Computer Science, pp. 83–98, 2010.

[140] M. Claassen, L. Reiter, M. O. Hengartner, J. M. Buhmann, and R. Aebersold, “Genericcomparison of protein inference engines,” Molecular & Cellular Proteomics, 2011.

[141] J. Buhmann, “Context sensitive information: Model validation by information theory,” inPattern Recognition (J. Martınez-Trinidad, J. Carrasco-Ochoa, C. Ben-Youssef Brants, andE. Hancock, eds.), vol. 6718 of Lecture Notes in Computer Science, pp. 12–21, Springer Berlin/ Heidelberg, 2011. 10.1007/978-3-642-21587-2 2.

[142] M. Frank and J. M. Buhmann, “Selecting the rank of svd by maximum approximation ca-pacity,” in International Symposium on Information Theory, St. Petersburg, pp. 1036 – 1040,IEEE, 2011.

[143] A. Vezhnevets and J. M. Buhmann, “Agnostic domain adaptation,” in Proceedings of the 33rdinternational conference on Pattern recognition, DAGM’11, (Berlin, Heidelberg), pp. 376–385,Springer-Verlag, 2011.

[144] L. Busse and J. Buhmann, “Model-based clustering of inhomogeneous paired comparisondata,” in Similarity-Based Pattern Recognition (M. Pelillo and E. Hancock, eds.), vol. 7005of Lecture Notes in Computer Science, pp. 207–221, Springer Berlin / Heidelberg, 2011.10.1007/978-3-642-24471-1 15.

[145] M. Frank, M. Chehreghani, and J. Buhmann, “The minimum transfer cost principle for model-order selection,” in Machine Learning and Knowledge Discovery in Databases (D. Gunopulos,T. Hofmann, D. Malerba, and M. Vazirgiannis, eds.), vol. 6911 of Lecture Notes in ComputerScience, pp. 423–438, Springer Berlin / Heidelberg, 2011. 10.1007/978-3-642-23780-5 37.

[146] A. Vezhnevets, V. Ferrari, and J. M. Buhmann, “Weakly supervised semantic segmentation with a multi-image model,” in Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 643–650, 2011.

[147] G.-M. Baschera, A. Busetto, S. Klingler, J. Buhmann, and M. Gross, “Modeling engagement dynamics in spelling learning,” in Proc. of the 15th Int. Conf. on Artificial Intelligence in Education (AIED 11), pp. 31–38, Springer Lecture Notes in Computer Science, 2011.

[148] J. M. Buhmann, M. H. Chehreghani, M. Frank, and A. P. Streich, “Information theoretic model selection for pattern analysis,” in ICML 2011 Workshop on “Unsupervised and Transfer Learning”, Bellevue, Washington (I. Guyon, G. Dror, V. Lemaire, G. Taylor, and D. Silver, eds.), vol. 27, (Clearwater Beach, Florida), pp. 51–65, JMLR: W&CP 5, 2012.

[149] M. H. Chehreghani, A. G. Busetto, and J. M. Buhmann, “Information theoretic model validation for spectral clustering,” in AISTATS 2012, La Palma, vol. 22, pp. 495–503, 2012.

[150] A. Vezhnevets, V. Ferrari, and J. M. Buhmann, “Weakly supervised structured output learning for semantic segmentation,” in CVPR 2012, Providence, pp. 845–852, 2012. (oral).

[151] A. Vezhnevets, J. Buhmann, and V. Ferrari, “Active learning for semantic segmentation with expected change,” in CVPR 2012, Providence, pp. 3162–3169, 2012.

[152] D. Laptev, A. Vezhnevets, S. Dwivedi, and J. Buhmann, “Anisotropic ssTEM image segmentation using dense correspondence across sections,” in MICCAI 2012, Nice (N. Ayache, H. Delingette, P. Golland, and K. Mori, eds.), vol. 7510 of Lecture Notes in Computer Science, pp. 323–330, Springer-Verlag Berlin / Heidelberg, 2012.

[153] D. Mahapatra, P. Schueffler, J. A. W. Tielbeek, J. M. Buhmann, and F. M. Vos, “A supervised learning based approach to detect Crohn’s disease in abdominal MR volumes,” in Abdominal Imaging, pp. 97–106, 2012.

[154] L. M. Busse, M. H. Chehreghani, and J. M. Buhmann, “The information content in sorting algorithms,” in International Symposium on Information Theory, Cambridge, MA, pp. 2746–2750, IEEE, 2012.

[155] D. Mahapatra, P. Schuffler, J. Tielbeek, J. M. Buhmann, and F. M. Vos, “A supervised learning based approach to detect Crohn’s disease in abdominal MR volumes,” in Proceedings of the MICCAI Workshop on Computational and Clinical Applications in Abdominal Imaging, MICCAI-CCAAI, 2012 (H. Yoshida, D. Hawkes, and M. Vannier, eds.), vol. 7601 of Lecture Notes in Computer Science, pp. 97–106, Springer-Verlag Berlin / Heidelberg, 2012.

[156] D. Mahapatra and J. Buhmann, “Cardiac LV and RV segmentation using mutual context information,” in Machine Learning in Medical Imaging (F. Wang, D. Shen, P. Yan, and K. Suzuki, eds.), Lecture Notes in Computer Science, pp. 201–209, Springer Berlin / Heidelberg, 2012. 10.1007/978-3-642-35428-1_25.

[157] F. M. Vos, J. A. Tielbeek, R. E. Naziroglu, Z. Li, P. Schueffler, D. Mahapatra, A. Wiebel, C. Lavini, J. M. Buhmann, H.-C. Hege, J. Stoker, and L. J. van Vliet, “Computational modeling for assessment of IBD: to be or not to be?,” in 34th Annual International Conference of the IEEE EMBS, pp. 3974–3977, 2012.

[158] J. Buhmann, M. Mihalák, R. Šrámek, and P. Widmayer, “Robust optimization in the presence of uncertainty,” in Innovations in Theoretical Computer Science 2013, Berkeley, pp. 505–514, ACM, 2013.

[159] A. Busetto, M. Sunnåker, and J. Buhmann, “Computational design of informative experiments in systems biology,” in System Theoretic and Computational Perspectives in Systems and Synthetic Biology (K. R. G.-B. Stan, V. Kulkarni, eds.), Springer Berlin / Heidelberg, 2013.

[160] J. Buhmann, “SIMBAD: Emergence of pattern similarity,” in Similarity-Based Pattern Analysis and Recognition (M. Pelillo, ed.), Advances in Computer Vision and Pattern Recognition, pp. 45–64, Springer Berlin / Heidelberg, 2013.

[161] P. J. Schuffler, T. J. Fuchs, C. S. Ong, V. Roth, and J. M. Buhmann, “Automated analysis of tissue micro-array images on the example of renal cell carcinoma,” in Similarity-Based Pattern Analysis and Recognition (M. Pelillo, ed.), Springer Berlin / Heidelberg, 2013.

[162] P. J. Schuffler, D. Mahapatra, J. A. W. Tielbeek, F. M. Vos, J. Makanyanga, D. Pendse, C. Y. Nio, J. Stoker, S. A. Taylor, and J. M. Buhmann, “A model development pipeline for Crohn’s disease severity assessment from magnetic resonance images,” in Abdominal Imaging, pp. 1–10, 2013.

[163] D. Mahapatra and J. M. Buhmann, “Automatic cardiac RV segmentation using semantic information with graph cuts,” in ISBI, pp. 1106–1109, 2013.

[164] D. Mahapatra, A. Vezhnevets, P. J. Schuffler, J. A. W. Tielbeek, F. M. Vos, and J. M. Buhmann, “Weakly supervised semantic segmentation of Crohn’s disease tissues from abdominal MRI,” in ISBI, pp. 844–847, 2013.

[165] G. Krummenacher, C. S. Ong, and J. M. Buhmann, “Ellipsoidal multiple instance learning,”in ICML (2), pp. 73–81, 2013.

[166] D. Mahapatra, P. J. Schueffler, J. A. Tielbeek, F. M. Vos, and J. M. Buhmann, “Semi-supervised and active learning for automatic segmentation of Crohn’s disease,” in MICCAI 2013 (K. Mori, I. Sakuma, Y. Sato, C. Barillot, and N. Navab, eds.), vol. 8150, Part II of Lecture Notes in Computer Science, pp. 214–221, Springer Berlin / Heidelberg, 2013.

[167] Localizing and segmenting Crohn’s disease affected regions in abdominal MRI using novel context features, vol. 8669, 2013.

[168] B. McWilliams, D. Balduzzi, and J. M. Buhmann, “Correlated random features for fast semi-supervised learning,” in Advances in Neural Information Processing Systems 26 (M. Welling and Z. Ghahramani, eds.), pp. 440–448, 2014.

[169] D. Laptev, A. Vezhnevets, and J. M. Buhmann, “Superslicing frame restoration for anisotropic ssTEM,” in IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014, April 29 - May 2, 2014, Beijing, China, pp. 1198–1201, 2014.

[170] D. Laptev and J. M. Buhmann, “Superslicing frame restoration for anisotropic ssTEM and video data,” in Proceedings of the Neural Connectomics Workshop at ECML 2014, Porto, Portugal, September 7, 2015, pp. 91–101, 2014.

[171] D. Mahapatra, P. J. Schuffler, J. A. W. Tielbeek, J. Makanyanga, J. Stoker, S. A. Taylor, F. M. Vos, and J. M. Buhmann, “Active learning based segmentation of Crohn’s disease using principles of visual saliency,” in IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014, April 29 - May 2, 2014, Beijing, China, pp. 226–229, 2014.

[172] D. Laptev and J. M. Buhmann, “Convolutional decision trees for feature learning and segmentation,” in Pattern Recognition - 36th German Conference, GCPR 2014, Münster, Germany, September 2-5, 2014, Proceedings, pp. 95–106, 2014.

[173] G. Zhou, S. Geman, and J. Buhmann, “Sparse feature selection by information theory,” in Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 926–930, June 2014.

[174] A. Gronskiy and J. Buhmann, “How informative are minimum spanning tree algorithms?,” in Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 2277–2281, June 2014.

[175] J. M. Buhmann, A. Gronskiy, and W. Szpankowski, “Free energy rates for a class of very noisy optimization problems,” in Proceedings of the 25th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (Mireille Bousquet-Mélou and Michèle Soria, eds.), vol. BA of DMTCS-HAL Proceedings Series, 2014.

[176] J. G. Zilly, J. M. Buhmann, and D. Mahapatra, “Boosting convolutional filters with entropy sampling for optic cup and disc image segmentation from fundus images,” in Machine Learning in Medical Imaging - 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Proceedings, pp. 136–143, 2015.

[177] D. Mahapatra and J. M. Buhmann, “Visual saliency based active learning for prostate MRI segmentation,” in Machine Learning in Medical Imaging - 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Proceedings, pp. 9–16, 2015.

[178] D. Mahapatra, Z. Li, F. Vos, and J. M. Buhmann, “Joint segmentation and groupwise registration of cardiac DCE MRI using sparse data representations,” in 12th IEEE International Symposium on Biomedical Imaging, ISBI 2015, Brooklyn, NY, USA, April 16-19, 2015, pp. 1312–1315, 2015.

[179] Y. Bian, A. Gronskiy, and J. M. Buhmann, “Greedy maxcut algorithms and their information content,” in 2015 IEEE Information Theory Workshop, ITW 2015, Jerusalem, Israel, April 26 - May 1, 2015, pp. 1–5, 2015.

[180] D. Mahapatra, P. J. Schuffler, F. Vos, and J. M. Buhmann, “Crohn’s disease segmentation from MRI using learned image priors,” in 12th IEEE International Symposium on Biomedical Imaging, ISBI 2015, Brooklyn, NY, USA, April 16-19, 2015, pp. 625–628, 2015.

[181] D. Mahapatra and J. M. Buhmann, “A field of experts model for optic cup and disc segmentation from retinal fundus images,” in 12th IEEE International Symposium on Biomedical Imaging, ISBI 2015, Brooklyn, NY, USA, April 16-19, 2015, pp. 218–221, 2015.

[182] D. Laptev and J. M. Buhmann, “Transformation-invariant convolutional jungles,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 3043–3051, 2015.

[183] D. Balduzzi, H. Vanchinathan, and J. M. Buhmann, “Kickback cuts backprop’s red-tape: Biologically plausible credit assignment in neural networks,” in Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pp. 485–491, 2015.

[184] D. Laptev, N. Savinov, J. M. Buhmann, and M. Pollefeys, “TI-POOLING: Transformation-invariant pooling for feature learning in convolutional neural networks,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), IEEE Conference on Computer Vision and Pattern Recognition. Proceedings, (Piscataway, NJ), pp. 289–297, IEEE, 2016.

[185] Y. Bian, A. Gronskiy, and J. M. Buhmann, “Information-theoretic analysis of maxcut algorithms,” in 2016 Information Theory and Applications Workshop, ITA 2016, La Jolla, CA, USA, January 31 - February 5, 2016, pp. 1–5, 2016.

[186] G. Krummenacher, B. McWilliams, Y. Kilcher, J. M. Buhmann, and N. Meinshausen, “Scalable adaptive stochastic optimization using random projections,” in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 1750–1758, 2016.

[187] J. M. Buhmann, J. Dumazert, A. Gronskiy, and W. Szpankowski, “Phase transitions in parameter rich optimization problems,” in Proceedings of the Fourteenth Workshop on Analytic Algorithmics and Combinatorics, ANALCO 2017, Barcelona, Spain, Hotel Porta Fira, January 16-17, 2017 (C. Martínez and M. D. Ward, eds.), pp. 148–155, SIAM, 2017.

[188] A. A. Bian, J. M. Buhmann, A. Krause, and S. Tschiatschek, “Guarantees for greedy maximization of non-submodular functions with applications,” in Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 498–507, 2017.

[189] A. A. Bian, B. Mirzasoleiman, J. M. Buhmann, and A. Krause, “Guaranteed non-convex optimization: Submodular maximization over continuous domains,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, pp. 111–120, 2017.

[190] G. Abbati, S. Bauer, S. Winklhofer, P. J. Schuffler, U. Held, J. M. Burgstaller, J. Steurer, and J. M. Buhmann, “MRI-based surgical planning for lumbar spinal stenosis,” in Medical Image Computing and Computer Assisted Intervention - MICCAI 2017 - 20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, Proceedings, Part III, pp. 116–124, 2017.

[191] N. S. Gorbach, A. A. Bian, B. Fischer, S. Bauer, and J. M. Buhmann, “Model selection for Gaussian process regression,” in Pattern Recognition - 39th German Conference, GCPR 2017, Basel, Switzerland, September 12-15, 2017, Proceedings, pp. 306–318, 2017.

[192] A. A. Bian, K. Y. Levy, A. Krause, and J. M. Buhmann, “Non-monotone continuous DR-submodular maximization: Structure and algorithms,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 486–496, 2017.

[193] N. S. Gorbach, S. Bauer, and J. M. Buhmann, “Scalable variational inference for dynamical systems,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 4809–4818, 2017.

[194] S. Bauer, N. S. Gorbach, D. Miladinovic, and J. M. Buhmann, “Efficient and flexible inference for stochastic systems,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 6991–7001, 2017.

[195] V. Wegmayr, S. Aitharaju, and J. M. Buhmann, “Classification of brain MRI with big data and deep 3D convolutional neural networks,” in Medical Imaging 2018: Computer-Aided Diagnosis, Houston, Texas, USA, 10-15 February 2018, p. 105751S, 2018.
